I have a camera and its calibration matrix K. I also have an image of a plane; I know the real-world coordinates of its 4 corners and their corresponding pixels. I know how to compute the homography matrix H between the image and the real plane when z = 0.
Now I am trying to recover the real 3D points of the plane, together with the rotation matrix and the translation vector.
I am following the paper "Calibrating an Overhead Video Camera" by Raul Rojas, sections 3 - 3.3.
My code is:
% Pixel coordinates of the four corners (homogeneous rows)
ImagePointsScreen = [16,8,1; 505,55,1; 505,248,1; 44,301,1];
screenImage = imread('screen.jpg');
% Real-world corner coordinates on the plane (z = 0, homogeneous rows)
RealPointsMirror = [0,0,1; 9,0,1; 9,6,1; 0,6,1];          % Mirror
RealPointsScreen = [0,0,1; 47.5,0,1; 47.5,20,1; 0,20,1];  % Screen
imagesc(screenImage);
hold on
for i = 1:4
    drawBubble(ImagePointsScreen(i,1), ImagePointsScreen(i,2), 1, 'g', int2str(i), 'r')
end
Points3DScreen = Get3DpointSurface(RealPointsScreen, ImagePointsScreen, 'Screen');
figure
hold on
plot3(Points3DScreen(:,1), Points3DScreen(:,2), Points3DScreen(:,3));
for i = 1:4
    drawBubble(Points3DScreen(i,1), Points3DScreen(i,2), 1, 'g', int2str(i), 'r')
end
function [Points3D] = Get3DpointSurface(RealPoints, ImagePoints, name)
    % Build the 8x9 DLT system from the four correspondences
    M = zeros(8,9);
    for i = 1:4
        M((i*2)-1, 1:3) = -RealPoints(i,:);
        M((i*2)-1, 7:9) =  RealPoints(i,:) * ImagePoints(i,1);
        M(i*2,     4:6) = -RealPoints(i,:);
        M(i*2,     7:9) =  RealPoints(i,:) * ImagePoints(i,2);
    end
    % The homography is the null vector of M (last column of V)
    [~, ~, V] = svd(M);
    X = V(:,end);
    H = [X(1:3)'; X(4:6)'; X(7:9)'];
    K = [680.561906875074, 0, 360.536967117290;
         0, 682.250270165388, 249.568615725655;
         0, 0, 1];
    % Remove the intrinsics: K^-1*H = [r1 r2 t] up to a scale factor
    newRO = pinv(K) * H;
    h1 = newRO(1:3,1);
    h2 = newRO(1:3,2);
    scaleFactor = (norm(h1) + norm(h2)) / 2;
    newRO = newRO ./ scaleFactor;
    r1 = newRO(1:3,1);
    r2 = newRO(1:3,2);
    r3 = cross(r1, r2);
    r3 = r3 / norm(r3);
    R = [r1, r2, r3];
    % Camera position in world coordinates
    RInv = pinv(R);
    O = -RInv * newRO(1:3,3);
    % Full projection matrix
    M = K * [R, -R*O];
    % Back-project each pixel and normalize the homogeneous coordinate
    Points3D = zeros(5, 4);
    for i = 1:4
        res = pinv(M) * [ImagePoints(i,1); ImagePoints(i,2); 1];
        res = res / res(4);
        Points3D(i,:) = res';
    end
    Points3D(i+1,:) = Points3D(1,:); % repeat the first point to close the square when drawing
end
My result is:
Now I have two problems:
1. Point 1 is at (0,0,0), and this is not its real location.
2. The points are upside down.
What am I doing wrong?
A homography is normally the transform of a plane between two positions/rotations.
The position of a plane in camera coordinates is normally called the pose, or the extrinsic parameters.
OpenCV has a solvePnP() function (plus a RANSAC variant, solvePnPRansac()) which estimates the pose of a known plane.
P.S. Sorry, I don't know the MATLAB equivalent, but Bouguet has a MATLAB version of the OpenCV 3D functions on his site.
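For what it's worth, the Computer Vision Toolbox has a similar routine, extrinsics. A minimal sketch, assuming worldPoints holds the known corners on the z = 0 plane and imagePoints their pixel locations:

% Pose of a known planar target (Computer Vision Toolbox)
% worldPoints and imagePoints are both M-by-2
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);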
I found the answer in the paper "Calibrating an Overhead Video Camera" by Raul Rojas, sections 3 - 3.3.
To start: H' = K^-1 * H
Given four points in the image and their known coordinates in the world, the matrix H can be recovered up to a scaling factor λ. We know that the first two columns of the rotation matrix R must be the first two columns of the transformation matrix. Let us denote by h1, h2, and h3 the three columns of the matrix H'. Due to the scaling factor we then have

λ r1 = h1
and
λ r2 = h2

Since |r1| = 1, we get λ = |h1|/|r1| = |h1| and λ = |h2|/|r2| = |h2|. We can thus compute the factor and eliminate it from the recovered matrix. We just set

H' = H'/λ

In this way we recover the first two columns of the rotation matrix R.
The third column of R can be found by remembering that any column in a rotation matrix is the cross product of the other two columns (times the appropriate plus or minus sign). In particular,

r3 = r1 × r2

Therefore, we can recover the rotation matrix R from H'. We can also recover the translation vector (the position of the camera in field coordinates). Just remember that

h'3 = −R t

Therefore the position vector of the camera pin-hole t is given by

t = −R^-1 h'3
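A minimal MATLAB sketch of this recipe, assuming H and K are the homography and calibration matrix from the question:

Hp = K \ H;                                   % H' = K^-1 * H
lambda = (norm(Hp(:,1)) + norm(Hp(:,2))) / 2; % scale factor, averaging |h1| and |h2|
Hp = Hp / lambda;
r1 = Hp(:,1);
r2 = Hp(:,2);
r3 = cross(r1, r2);                           % third column of R
R  = [r1, r2, r3];
t  = -(R \ Hp(:,3));                          % camera position: t = -R^-1 * h'3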
What I have done:
To create a novel view (the right image) from a given left image, I used the formula for pure translation between views (from the Hartley & Zisserman book):

x' = x + K t / Z

where

x' = [u'; v'; 1]
x  = [u;  v;  1]
K  = [f 0 cx; 0 f cy; 0 0 1]
t  = [t1 t2 t3]^T
Z  = depth of the pixel in the left image
The images and camera matrix were taken from Middlebury stereo 2014 dataset.
By implementing this, I get an image with holes (black regions) due to disocclusions.
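A minimal sketch of this forward warp (hypothetical variable names: leftImage is a grayscale left view, Z its per-pixel depth map, K and t as above, with t3 = 0 so no renormalization is needed); forward mapping like this is exactly what leaves the holes:

[h, w] = size(leftImage);
rightImage = zeros(h, w, 'like', leftImage);
for v = 1:h
    for u = 1:w
        d  = K * t / Z(v, u);      % pixel shift K*t/Z
        u2 = round(u + d(1));
        v2 = round(v + d(2));
        if u2 >= 1 && u2 <= w && v2 >= 1 && v2 <= h
            rightImage(v2, u2) = leftImage(v, u);   % unmapped pixels stay black
        end
    end
end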
What I need to do:
To fill these holes, several algorithms exist which modify the depth map of the right view prior to warping.
Can you tell me how I can find the depth map of my synthesized (right) view?
Above is the result I have got so far; please help!
My answer is actually a workaround building on Priyamvadha's previous question and my answer to it.
If you have the extrinsic/intrinsic parameters and the 3D points, consider reversing the process (a sketch follows below):

1. Transform (as you did with my previous answer) all the 3D points into the right camera's reference system (use the extrinsic R and t, "reversing" the transformation).
2. Now that you have all the 3D coordinates in that system, remember that the Z value is strongly linked with the disparity.
3. For each point and its Z coordinate, the disparity should be equal to

D = (b*f)/Z

with b the baseline and f the focal length from the intrinsics.

You should then have a synthesized disparity for your synthesized image. Link each disparity value with the corresponding projected point in the synthesized image.
Yes, I could have joined all the steps into a single formula, but it would not have meant anything to you.
P.S. If you have no point, there will be no depth for the black holes in the image.
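A minimal sketch of these steps, assuming P3D is the N-by-3 matrix of points already expressed in the right camera's frame, K the intrinsic matrix, b and f as above, and h, w the image size:

Z  = P3D(:, 3);
D  = (b * f) ./ Z;                    % disparity per point
proj = (K * P3D')';                   % project into the right image
px = round(proj(:, 1) ./ proj(:, 3));
py = round(proj(:, 2) ./ proj(:, 3));
dispMap = nan(h, w);                  % NaN where no point projects (the holes)
ok = px >= 1 & px <= w & py >= 1 & py <= h;
dispMap(sub2ind([h w], py(ok), px(ok))) = D(ok);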
This is the first time I have done image processing, so I have a lot of questions:
I have two pictures taken from different positions, one from the left and the other from the right, like the picture below.
Step 1: Read images by using imread function
I1 = imread('DSC01063.jpg');
I2 = imread('DSC01064.jpg');
Step 2: Using camera calibrator app in matlab to get the cameraParameters
load cameraParams.mat
Step 3: Remove Lens Distortion by using undistortImage function
[I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');
[I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');
Step 4: Detect feature points by using detectSURFFeatures function
imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
Step 5: Extract feature descriptors by using extractFeatures function
features1 = extractFeatures(rgb2gray(I1), imagePoints1);
features2 = extractFeatures(rgb2gray(I2), imagePoints2);
Step 6: Match Features by using matchFeatures function
indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);
matchedPoints1 = imagePoints1(indexPairs(:, 1));
matchedPoints2 = imagePoints2(indexPairs(:, 2));
From there, how can I construct the 3D point cloud? In step 2, I used the checkerboard shown in the attached picture to calibrate the camera.
The square size is 23 mm, and from cameraParams.mat I know the intrinsic matrix (or camera calibration matrix K), which has the form K = [alphax 0 x0; 0 alphay y0; 0 0 1].
I need to compute the fundamental matrix F and the essential matrix E in order to calculate the camera matrices P1 and P2, right?
After that, when I have the camera matrices P1 and P2, I can use linear triangulation methods to estimate the 3D point cloud. Is that the correct way?
I would appreciate any suggestions.
Thanks!
To triangulate the points you need the so-called "camera matrices" and the 2D points in each of the images (which you already have).
In MATLAB you have the function triangulate, which does the job for you.
If you have calibrated the cameras, you should have this information already. In any case, here is an example of how to create the "stereoParams" object needed for the triangulation.
Yes, that is the correct way. Now that you have matched points, you can use estimateFundamentalMatrix to compute the fundamental matrix F. Then you get the essential matrix E by multiplying F by the intrinsics. Be careful about the order of multiplication, because the intrinsic matrix in cameraParameters is transposed relative to what you see in most textbooks.
Now you have to decompose E into a rotation and a translation, from which you can construct the camera matrix for the second camera using cameraMatrix. You also need the camera matrix for the first camera, for which the rotation is the 3x3 identity matrix and the translation a 3-element zero vector.
Edit: there is now a cameraPose function in MATLAB, which computes an up-to-scale relative pose ('R' and 't') given the Fundamental matrix and the camera parameters.
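Putting the chain together, a minimal sketch (toolbox function names circa R2016; cameraPose was later renamed relativeCameraPose, so check your release):

% Fundamental matrix from the matched points
[F, inliers] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'RANSAC', 'NumTrials', 2000);
% Textbook-convention E = K' * F * K; note IntrinsicMatrix is K transposed
E = cameraParams.IntrinsicMatrix * F * cameraParams.IntrinsicMatrix';
% Up-to-scale pose of camera 2 relative to camera 1
[relOrient, relLoc] = cameraPose(F, cameraParams, ...
    matchedPoints1(inliers), matchedPoints2(inliers));
[R2, t2] = cameraPoseToExtrinsics(relOrient, relLoc);
% Camera matrices: identity pose for camera 1
P1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
P2 = cameraMatrix(cameraParams, R2, t2);
% Linear triangulation of the matched points
worldPoints = triangulate(matchedPoints1(inliers), matchedPoints2(inliers), P1, P2);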
There are two reasons for me to ask this question:
1. I want to know if my understanding of this issue is correct.
2. To clarify a doubt I have.
I want to change the coordinate system of a set of points (from an old Cartesian coordinate system to a new Cartesian coordinate system). This transformation will involve translation as well as rotation. This is what I plan to do:
With respect to this image I have a set of points which are in the XYZ coordinate system (Red). I want to change it with respect to the axes UVW (Purple). In order to do so, I have understood that there are two steps involved: Translation and Rotation.
When I translate, I only change the origin. (Say I want the UVW origin at (5,6,7). Then, for all points in my data, 5 is subtracted from the x coordinate, 6 from y, and 7 from z. By doing so, I get a set of translated data.)
Now I have to apply a rotation transform (on the translated data). The rotation matrix is shown in the image. The values Ux, Uy and Uz are the coordinates of a point on the U axis at unit distance from the origin. Similarly, the values Vx, Vy and Vz are the coordinates of a point on the V axis at unit distance from the origin. (I want to know if I am right here.) Wx, Wy, Wz is calculated as ((normalized u) × (normalized v)).
(Also, if it serves any purpose, I would like to let you know that I am using MATLAB.)
edit:
I have a set of 42 points in 3D (a 42x3 matrix A). I want the first point to be considered the origin of the UVW frame, so the values of the first point will be my translation vector. Correct?
Next, to calculate the rotation matrix: according to my requirement, the 6th row of matrix A has to lie on the U axis while the 37th row has to lie on the V axis. Consequently, vector u will be (1st row minus 6th row) of matrix A, while vector v will be (1st row minus 37th row) of matrix A.
The first row of the rotation matrix will be u/|u| (u normalized). The second row will be v/|v| (v normalized). The third row will be (u × v). Am I right here?
Given this information, how can I calculate the values of Wx, Wy and Wz, i.e. the 3rd row of the rotation matrix R?
Since you already have U and V, the two basis vectors of the orthonormal UVW system, the W basis vector is the cross product of U and V. The cross product gives the vector perpendicular to its operands; hence W = U × V. The components of W fill in the third row of the rotation matrix.
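A minimal sketch of the whole construction for the 42-point case above (it assumes u and v are orthogonal, and takes each axis direction from the new origin towards the chosen axis point):

t  = A(1, :);                    % translation: the first point becomes the origin
u  = A(6, :)  - t;               % direction of the U axis
v  = A(37, :) - t;               % direction of the V axis
ru = u / norm(u);
rv = v / norm(v);
rw = cross(ru, rv);              % W = U x V fills the third row
R  = [ru; rv; rw];
newA = (R * bsxfun(@minus, A, t)')';   % translate first, then rotate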
Is my approach correct?
The order of the transforms matters; changing the order would lead to different results. When transforming between systems, scaling and rotation are usually tackled first, and translation last. The reason is that rotation is always performed with respect to the origin. If the new system is not at the old one's origin, then applying rotation would rotate the new system not around its own origin but around the old system's origin. See the right-side case of figure 3-4 on this page to understand what would happen if it's not at the origin; imagine the pot as the UVW coordinate system.
Think of both the coordinate systems being super-imposed (laid one atop the other). Now when you rotate UVW system with respect to the origin of XYZ, you will end up with the effect of rotating UVW w.r.t its own origin. Once rightly oriented, you can apply translation to it. However, if you'd already translated, then rotating would lead to translated rotation.
If you're using column-vector convention then TR would be the order i.e. rotation followed by translation. If you're using the row-vector convention then RT would be the order, again the order is rotation followed by translation.
You can take the cross product of the vectors OU and OV.
I think it's easier to perform it in steps. 1) Translation. 2) Rotation about x-axis. 3) Rotation about y-axis. 4) Rotation about z-axis.
% Assuming this is your coordinates before any operation
x0 = 5; y0 = 5; z0 = 5;
% This is the new origin
u = 5; v = 6; w = 7;
% To rotate pi/4 about the x-axis, pi/3 about the y-axis, and pi/2 about the
% z-axis, the three rotation matrices are:
rx = [1 0 0; 0 cos(pi/4) -sin(pi/4); 0 sin(pi/4) cos(pi/4)];
ry = [cos(pi/3) 0 sin(pi/3); 0 1 0; -sin(pi/3) 0 cos(pi/3)];
rz = [cos(pi/2) -sin(pi/2) 0; sin(pi/2) cos(pi/2) 0; 0 0 1];
% First perform translation
xT = x0-u; yT = y0-v; zT = z0-w;
% Then perform rotation about x
rotated_x = mtimes( rx,[xT;yT;zT]);
% Then perform rotation about y
rotated_xy = mtimes( ry, rotated_x);
% Then perform rotation about z
rotated_xyz = mtimes( rz, rotated_xy);
Suppose I have 3+ coplanar but not collinear points in R^4. To find the 2D plane (not hyperplane) in which they all lie, I used the following plane fit algorithm from MatlabCentral:
function [n,V,p] = affine_fit(X)
% Computes the plane that fits best (least square of the normal distance
% to the plane) a set of sample points.
% INPUTS:
% X: a N by 3 matrix where each line is a sample point
%OUTPUTS:
%n : a unit (column) vector normal to the plane
%V : a 3 by 2 matrix. The columns of V form an orthonormal basis of the plane
%p : a point belonging to the plane
%NB: this code actually works in any dimension (2,3,4,...)
%Author: Adrien Leygue
%Date: August 30 2013
% the mean of the samples belongs to the plane
p = mean(X,1);
% The samples are reduced:
R = bsxfun(@minus,X,p);
% Computation of the principal directions of the samples cloud
[V,D] = eig(R'*R);
% Extract the output from the eigenvectors
n = V(:,1);
V = V(:,2:end);
end
I employed the algorithm in a higher dimension than specified, so X is a 4x4 matrix which holds 4 points in 4 coordinate dimensions. The generated output is something like this.
[n,V,p] = affine_fit(X);
n = -0.0252
    -0.0112
     0.9151
    -0.4024

V =  0.9129   -0.3475    0.2126
     0.3216    0.2954   -0.8995
     0.1249    0.3532    0.1493
     0.2180    0.8168    0.3512

p = -0.9125    1.0526    0.2325   -0.0621
What I want to do now is find out whether other points of my choosing also lie in the plane. I'm sure it's fairly easy given the information above, yet at this point I only know that I need two linear equations to describe a 2D plane in 4D, or parametric equations in two variables. I can set them up in theory, but writing the code has been problematic. Perhaps there is a more straightforward way to test this in MATLAB?
You can use the Matlab function pca (see for example here). For example, you can determine the basis of your plane, the normal vectors to your plane and a point m on the plane as follows:
coeff = pca(X);
basis = coeff(:,1:2);
normals = coeff(:,3:4);
m = mean(X);
To check whether a point p lies in this plane, it suffices to verify that m - p is orthogonal (dot product equal to zero) to the normal vectors of the plane, using dot.
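A minimal sketch of that test, with tol a small tolerance of your choosing:

tol = 1e-10;
% p lies in the plane iff (p - m) has no component along either normal
inPlane = all(abs(normals' * (p - m)') < tol);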
That's my first post, so please be kind.
I have a matrix with 3 to 10 coordinates and I want to connect these points to form the polygon with maximum area.
I tried fill() [1] to generate a plot, but how do I calculate the area of that plot? Is there a way of converting the plot back into a matrix?
What would you recommend?
Thank you in advance!
[1]
x1 = [ 0.0, 0.5, 0.5 ];
y1 = [ 0.5, 0.5, 1.0 ];
fill ( x1, y1, 'r' );
[update]
Thank you for your answer, MatlabDoug, but I think I did not formulate my question clearly enough. I want to connect all of these points to form the polygon with maximum area.
Any new ideas?
x1 = rand(1,10)
y1 = rand(1,10)
vi = convhull(x1,y1)
polyarea(x1(vi),y1(vi))
fill ( x1(vi), y1(vi), 'r' );
hold on
plot(x1,y1,'.')
hold off
What is happening here is that CONVHULL is telling us which vertices (vi) are on the convex hull (the smallest convex polygon that encloses all the points). Knowing which ones are on the convex hull, we ask MATLAB for the area with POLYAREA.
Finally, we use your FILL command to draw the polygon, then PLOT to place the points on it for confirmation.
I second groovingandi's suggestion of trying all polygons; you just have to be sure to check the validity of the polygon (no self-intersections, etc).
Now, if you want to work with lots of points... As MatlabDoug pointed out, the convex hull is a good place to start. Notice that the convex hull gives a polygon whose area is the maximum possible. The problem, of course, is that there could be points in the interior of the hull that are not part of the polygon. I propose the following greedy algorithm, but I am not sure if it guarantees THE maximum area polygon.
The basic idea is to start with the convex hull as a candidate final polygon, and carve out triangles corresponding to the unused points until all the points belong to the final polygon. At each stage, the smallest possible triangle is removed.
Given: points P = {p1, ..., pN} and convex hull H = {h1, ..., hM},
where each h is a point that lies on the convex hull.
H is a subset of P, and it is ordered such that adjacent
points in the list H are edges of the convex hull, and the
first and last points form an edge.

Let Q = H
while (Q.size < P.size)
    % For each unused point, compute the minimum-area triangle
    T = empty heap of triangles keyed by their area
    for each point p in P but not in Q
        for each edge E of Q
            if the triangle formed by p and E does not contain any other point
                add triangle(p,E) to T with key area(triangle(p,E))
    % Modify the current polygon Q to carve out the triangle
    let t = (p,E) be the element of T with minimum area
    find the ordered pair of points that form the edge E within Q
    (denote them Pa and Pb)
    replace the pair (Pa,Pb) with (Pa,p,Pb)
Now, in practice you don't need a heap for T, just append the data to four lists: one for P, one for Pa, one for Pb, and one for the area. To test if a point lies within a triangle, you only need to test each point against the lines forming the sides of the triangle, and you only need to test points not already in Q. Finally, to compute the area of the final polygon, you can triangulate it (like with the delaunay function, and sum up the areas of each triangle in the triangulation), or you can find the area of the convex hull, and subtract out the areas of the triangles as you carve them out.
Again, I don't know if this greedy algorithm is guaranteed to find the maximum area polygon, but I think it should work most of the time, and is interesting nonetheless.
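For the containment check, here is a minimal sketch of a sign-based point-in-triangle test (a hypothetical helper, not from the answer above; all inputs are 1-by-2 row vectors):

function inside = inTriangle(q, a, b, c)
    % q is inside triangle (a,b,c) iff it lies on the same side of all three edges
    s1 = sign(cross2(b - a, q - a));
    s2 = sign(cross2(c - b, q - b));
    s3 = sign(cross2(a - c, q - c));
    inside = (s1 == s2) && (s2 == s3);
end

function z = cross2(u, v)
    % z-component of the 2D cross product
    z = u(1)*v(2) - u(2)*v(1);
end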
You said you only have 3...10 points to connect. In that case, I suggest you just try all possible orderings, compute the areas with polyarea, and take the biggest one.
Only if the number of points increases, or if you have to compute this frequently enough that computation time matters, is it worth investing time in a better algorithm. However, I think it is difficult to come up with such an algorithm and prove its completeness.
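A brute-force sketch for small N (P is the N-by-2 matrix of points; as noted earlier, self-intersecting orderings should also be rejected, which this sketch omits). Since orderings are cyclic, the first point can be fixed, reducing the search to (N-1)! candidates:

n = size(P, 1);
rest = perms(2:n);                   % orderings of the remaining points
bestArea = 0;
bestIdx  = 1:n;
for k = 1:size(rest, 1)
    idx = [1, rest(k, :)];
    a = polyarea(P(idx, 1), P(idx, 2));
    if a > bestArea
        bestArea = a;
        bestIdx  = idx;
    end
end
fill(P(bestIdx, 1), P(bestIdx, 2), 'r');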
Finding the right order for the points is the hard part, as Amro commented. Does this function suffice?
function [idx] = Polyfy(x, y)
% [idx] = Polyfy(x, y)
% Given vectors x and y that contain pairs of points, find the order that
% joins them into a polygon. fill(x(idx),y(idx),'r') should show no holes.
%ensure column vectors
if (size(x,1) == 1)
x = x';
end
if (size(y,1) == 1)
y = y';
end
% vectors from centroid of points to each point
vx = x - mean(x);
vy = y - mean(y);
% unit vectors from centroid towards each point
v = (vx + 1i*vy)./abs(vx + 1i*vy);
vx = real(v);
vy = imag(v);
% rotate all unit vectors by first
rot = [vx(1) vy(1) ; -vy(1) vx(1)];
v = (rot*[vx vy]')';
% find angles from first vector to each vector
angles = atan2(v(:,2), v(:,1));
[angles, idx] = sort(angles);
end
The idea is to find the centroid of the points, then find the vectors from the centroid to each point. You can think of these vectors as the sides of triangles. The polygon is made up of the set of triangles where each vector is used as the "left" and "right" side exactly once, and no vectors are skipped. This boils down to ordering the vectors by angle around the centroid.
I chose to do this by normalizing the vectors to unit length, choosing one of them as a rotation vector, and rotating the rest. This allowed me to simply use atan2 to find the angles. There's probably a faster and/or more elegant way to do this, but I was confusing myself with trig identities. Finally, sorting those angles provides the correct order for the points to form the desired polygon.
This is the test function:
function [x, y] = TestPolyArea(N)
x = rand(N,1);
y = rand(N,1);
[indexes] = Polyfy(x, y);
x2 = x(indexes);
y2 = y(indexes);
a = polyarea(x2, y2);
disp(num2str(a));
fill(x2, y2, 'r');
hold on
plot(x2, y2, '.');
hold off
end
You can get some pretty wild pictures by passing N = 100 or so!