This is my first time doing image processing, so I have a lot of questions:
I have two pictures taken from different positions, one from the left and the other from the right, like the picture below. [![enter image description here][1]][1]
Step 1: Read images by using imread function
I1 = imread('DSC01063.jpg');
I2 = imread('DSC01064.jpg');
Step 2: Use the Camera Calibrator app in MATLAB to get the cameraParameters
load cameraParams.mat
Step 3: Remove Lens Distortion by using undistortImage function
[I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');
[I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');
Step 4: Detect feature points by using detectSURFFeatures function
imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
Step 5: Extract feature descriptors by using extractFeatures function
features1 = extractFeatures(rgb2gray(I1), imagePoints1);
features2 = extractFeatures(rgb2gray(I2), imagePoints2);
Step 6: Match Features by using matchFeatures function
indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);
matchedPoints1 = imagePoints1(indexPairs(:, 1));
matchedPoints2 = imagePoints2(indexPairs(:, 2));
From there, how can I construct the 3D point cloud? In step 2, I used the checkerboard shown in the attached picture to calibrate the camera. [![enter image description here][2]][2]
The square size is 23 mm, and from cameraParams.mat I know the intrinsic matrix (camera calibration matrix) K, which has the form K = [alphax 0 x0; 0 alphay y0; 0 0 1].
I need to compute the fundamental matrix F and the essential matrix E in order to calculate the camera matrices P1 and P2, right?
After that, when I have the camera matrices P1 and P2, I use a linear triangulation method to estimate the 3D point cloud. Is that the correct way?
I would appreciate any suggestions.
Thanks!
To triangulate the points you need the so-called camera matrices and the 2D points in each of the images (which you already have).
In MATLAB you have the triangulate function, which does the job for you.
If you have calibrated the cameras, you should have this information already. In any case, here is an example of how to create the stereoParams object needed for the triangulation.
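For instance, a minimal sketch of that route, assuming you have run the Stereo Camera Calibrator app and saved its output (the file name here is hypothetical):
load('stereoParams.mat');  % hypothetical file holding a stereoParameters object from the calibrator app
worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);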
Yes, that is the correct way. Now that you have matched points, you can use estimateFundamentalMatrix to compute the fundamental matrix F. Then you get the essential matrix E by multiplying F by the intrinsics on both sides (E = K' * F * K in textbook notation). Be careful about the order of multiplication, because the intrinsic matrix in cameraParameters is transposed relative to what you see in most textbooks.
Now you have to decompose E into a rotation and a translation, from which you can construct the camera matrix for the second camera using cameraMatrix. You also need the camera matrix for the first camera, for which the rotation is a 3x3 identity matrix and the translation is a 3-element zero vector.
Edit: there is now a cameraPose function in MATLAB, which computes an up-to-scale relative pose ('R' and 't') given the Fundamental matrix and the camera parameters.
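Putting the pieces together, here is a minimal sketch of the whole chain, reusing matchedPoints1, matchedPoints2, and cameraParams from the question (function names as in the MATLAB releases that ship cameraPose; check your version's documentation):
% Estimate F from the matched points; RANSAC rejects outlier matches.
[F, inlierIdx] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'RANSAC', 'NumTrials', 10000);
inlierPoints1 = matchedPoints1(inlierIdx);
inlierPoints2 = matchedPoints2(inlierIdx);
% Up-to-scale pose of camera 2 relative to camera 1.
[R, t] = cameraPose(F, cameraParams, inlierPoints1, inlierPoints2);
% Camera projection matrices: camera 1 at the origin, camera 2 from the pose.
camMatrix1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
camMatrix2 = cameraMatrix(cameraParams, R', -t*R');
% Triangulate the inlier correspondences into a 3-D point cloud.
points3D = triangulate(inlierPoints1, inlierPoints2, camMatrix1, camMatrix2);
pcshow(pointCloud(points3D));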
My goal is to create a heatmap in MATLAB using an existing floor plan and manually collected RF RSSI values.
I have marked numerical values (~30 to ~80) on a floor plan using pen and paper after recording signal strength in various areas of a building. I am looking to plot this as a heatmap overlaid on the original floor plan.
What I have done is the following:
Google, YouTube, man pages. My research has gotten me to the following:
Importing an image into MATLAB by asking the user to select a photo
Reading it in with the imread() function
Setting up an array the size of the image, all zeroed out
This is currently done manually, but that is not a concern right now
Manually placing heatMap(x,y) = val; at the locations marked on the original plan
I found the x,y pixel coordinates from Photoshop's info palette
Used a Gaussian low-pass filter on the points set in heatMap
The filter argument includes the image dimensions in pixels
At this point, I try to display the filtered heatMap points over the original floor plan. I am using the same dimensions (in pixels) for the filter and for the array made from the image size. However, the heatmap points do not overlay where I specified when hard-coding them.
The code is as follows:
%Import an image***********************************************
%Ask a user to import the image they want
%**************************************************************
%The commented out line will not show the file type when prompted to select
%an image
%[fn,pn] = uigetfile({'*.TIFF,*.jpg,*.JPG,*.jpeg,*.bmp','Image files'}, 'Select an image');
%This line will select any file type, want to restrict in future
[fn,pn] = uigetfile({'*.*','Image files'}, 'Select an image');
importedImage = imread(fullfile(pn,fn));
%Create size for heat map**************************************
%Setting the size for the map, see comments below
%**************************************************************
%What if I wanted an arbitrary dimension
%Or better yet, get the dimensions from the imported file
heatMap = zeros(1512,1080);
%Manually placing the heatmap values along a grid
%Want to set zones for this, maybe plot out in excel and use the cells to
%define the image size?
heatMap(328,84) = .38;
heatMap(385,132) = .42;
heatMap(418,86) = .40;
heatMap(340,405) = .60;
heatMap(515,263) = .35;
heatMap(627,480) = .40;
heatMap(800,673) = .28;
heatMap(892,598) = .38;
heatMap(1020,540) = .33;
heatMap(1145,684) = .38;
heatMap(912,275) = .44;
heatMap(798,185) = .54;
%Generate the Map**********************************************
%Making the density and heat map
%**************************************************************
gaussiankernel = fspecial('gaussian', [1512 1080], 60);
density = imfilter(heatMap, gaussiankernel, 'replicate');
%imshow(density, []);
oMask = heatmap_overlay(importedImage, density, 'summer');
set(figure(1), 'Position', [0 0 1512 1080]);
imshow(oMask,[]);
colormap(summer);
colorbar;
Any idea why the overlaid filter is offset and not where specified?
This can be reproduced with any 1512 x 1080 image.
Please let me know if you want the original image I used.
The array was off.
What I did was the following:
Output the density map only
Verified dimension sizes in the Matlab Workspace
I noticed that the arrays for density, heatMap, importedImage, and oMask were not all n-by-m in size.
heatMap and density were n-by-m, while importedImage and oMask were m-by-n.
I swapped the dimensions so they are all n-by-m, and now the image overlays just fine.
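A minimal sketch of that fix, deriving every size from the imported image instead of hard-coding 1512 x 1080 (variable names as in the question's code):
[rows, cols, ~] = size(importedImage);        % MATLAB stores images as rows-by-columns
heatMap = zeros(rows, cols);                  % now matches importedImage
gaussiankernel = fspecial('gaussian', [rows cols], 60);
density = imfilter(heatMap, gaussiankernel, 'replicate');
oMask = heatmap_overlay(importedImage, density, 'summer');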
Thank you for your attention.
Recently I have been trying to use the MATLAB program provided by Andrea Fusiello, Emanuele Trucco, and Alessandro Verri in "A compact algorithm for rectification of stereo pairs" to rectify the pictures taken by the two cameras in my research project on stereo calibration.
Though the MATLAB code is not complex, how to get the projection matrices of the two cameras still confuses me.
I used the following MATLAB code to get the intrinsic matrix and the R and T of each camera. I think I can get the projection matrix using the formula P = A1*[R|T]. However, as you can see in the picture, the result is strange.
So I think there is something wrong with the projection matrices I got. Could anyone tell me how to get the projection matrices correctly?
matlab code:
numImages = 9;
files = cell(1, numImages);
for i = 1:numImages
    files{i} = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata', ...
        'calibration', 'left', sprintf('left%d.bmp', i));
end
[imagePoints, boardSize] = detectCheckerboardPoints(files);
squareSize = 120;
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
cameraParams = estimateCameraParameters(imagePoints, worldPoints);
imOrig = imread(fullfile(matlabroot, 'toolbox', 'vision', 'visiondata', ...
'calibration', 'left', 'left9.bmp'));
[imagePoints, boardSize] = detectCheckerboardPoints(imOrig);
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);
The result:
There is a built-in function cameraMatrix in the Computer Vision System Toolbox that computes the camera projection matrix.
However, if you are trying to do stereo rectification, you should calibrate a stereo pair of cameras using the Stereo Camera Calibrator app, and then use the rectifyStereoImages function. See this example.
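A rough sketch of that route, assuming matching sets of left/right checkerboard images (the file lists and image variables here are placeholders):
% Detect checkerboard corners in the matching left/right calibration images.
[imagePoints, boardSize] = detectCheckerboardPoints(leftImageFiles, rightImageFiles);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Calibrate the stereo pair, then rectify a test pair of images.
stereoParams = estimateCameraParameters(imagePoints, worldPoints);
[J1, J2] = rectifyStereoImages(imLeft, imRight, stereoParams);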
The thing to keep in mind is that the functions in the Computer Vision System Toolbox use the post-multiply convention, i.e. row vector times the matrix. Because of this, the rotation matrices and the camera projection matrix are transposes of their counterparts in Trucco and Verri and other textbooks. So the formula used by cameraMatrix is
P = [R;t] * K
So P ends up being 4-by-3, and not 3-by-4. This may explain why you are getting weird results.
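If you do want the textbook-style 3-by-4 matrix for the rectification code, a transpose of the cameraMatrix output should be enough. A small sketch, assuming cameraParams, R, and t from the code above:
P = cameraMatrix(cameraParams, R, t);  % 4-by-3, in MATLAB's row-vector convention
Pfusiello = P';                        % 3-by-4, i.e. K*[R|t] as in Fusiello, Trucco, and Verri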
I'm writing a piece of MATLAB code for the reconstruction of radial 3D NMR data. However, since the radon functions built into MATLAB are only 2D, I have to apply them twice. After the first iradon transform in my code I get projections of the imaged object that look as they should (from different angles). But after the second iradon transform I do not get a correct reconstruction of the object (just a lot of noise and some blurry stuff where the object should be).
My attempt at a solution is shown below.
The input data is a free induction decay or fid: fid(NP,nv,nv2) where NP is the number of projections, nv is the number of theta angle increments and nv2 is the number of phi angle increments.
Doing the ifft on the FID will give a sinogram of dimensions proj(NP,phi) for each theta angle.
Doing the first iradon gives filtered and unfiltered backprojections with dimensions I(r,x) for each theta angle. (so that I3 and I4 have dimensions (r,z,theta) )
Doing the last iradon transform should then render the reconstructed 3D image with dimensions I(x,y,z)
I3=[];
I4=[];
I5=[];
I6=[];
for k = 1:nv2
    FID = squeeze(fid(:,:,k));
    proj = abs(fftshift(ifft(ifftshift(FID),[],1)));
    I1 = imrotate(iradon(proj,theta,'v5cubic','none',1,2*NP),-90);
    I2 = imrotate(iradon(proj,theta,'v5cubic','Ram-Lak',1,2*NP),-90);
    I3(:,:,k) = I1;
    I4(:,:,k) = I2;
end
for k = 1:size(I3,2)
    I5(:,:,k) = iradon(squeeze(I3(:,k,:)),phi,'v5cubic','none',1,2*NP);
    I6(:,:,k) = iradon(squeeze(I4(:,k,:)),phi,'v5cubic','Ram-Lak',1,2*NP);
end
I have a stereo camera system and I am trying MATLAB's Computer Vision System Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I tried each camera separately based on the first example, found the intrinsic camera calibration matrices for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox, so I commented out the corresponding code from the MATLAB example.
Here are the intrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses from bumblebee XB3 cameras.
Question: aren't they supposed to be the same?
Then I did the stereo calibration based on the fifth example. I saved the rotation matrix (R) and translation vector (T) from that, and commented out the corresponding code from the MATLAB example.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices into the MATLAB example and tried to find the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1- My epipolar constraint values are too large (on the order of 10^16)!
2- I am not sure about the camera matrices and how I calculated them from the R and T given by the Caltech toolbox.
P.S. As far as feature extraction goes, that part is fine.
It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
    files{i} = fullfile('...\sparse_matlab', files{i});
    images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
    % detect SURF feature points
    images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
        'MetricThreshold', 600);
    % extract SURF descriptors
    [images(i).featureVectors, images(i).points] = ...
        extractFeatures(rgb2gray(images(i).image), images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
matchFeatures(images(1).featureVectors, images(2).featureVectors,...
'Prenormalized', true,'MaxRatio',0.4) ;
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a value near zero, It will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
    % Compute the fundamental matrix using the example helper function
    % Evaluate the epipolar constraint
    epipolarConstraint = [matchedPoints1.Location(k,:), 1]...
        *helperCameraMatricesToFMatrix(images(1).CameraMatrix, images(2).CameraMatrix)...
        *[matchedPoints2.Location(k,:), 1]';
    %%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
    % Only consider feature matches where the absolute value of the
    % constraint expression is less than the threshold.
    valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
    % For all image locations from a list of correspondences build an A
    pointInImage1 = validpts1(i).Location;
    pointInImage2 = validpts2(i).Location;
    P1 = images(1).CameraMatrix'; % Transpose to match the convention in
    P2 = images(2).CameraMatrix'; % in [1]
    A = [
        pointInImage1(1)*P1(3,:) - P1(1,:);...
        pointInImage1(2)*P1(3,:) - P1(2,:);...
        pointInImage2(1)*P2(3,:) - P2(1,:);...
        pointInImage2(2)*P2(3,:) - P2(2,:)];
    % Compute the 3-D location using the smallest singular value from the
    % singular value decomposition of the matrix A
    [~,~,V] = svd(A);
    X = V(:,end);
    X = X/X(end);
    % Store location
    points3D(i,:) = X';
    % Store pixel color for visualization
    y = round(pointInImage1(1));
    x = round(pointInImage1(2));
    color(i,:) = squeeze(doubleimage(x,y,:))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, and also support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox use somewhat different conventions.
The example uses the pre-multiply convention, i.e. row vector * matrix, while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you do use the camera parameters from Caltech, you would have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
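A rough sketch of that conversion, assuming K1, K2, R, and T are exactly the values loaded from the Caltech .mat files in your code:
K1 = K1';                    % transpose the intrinsics into the row-vector convention
K2 = K2';
Rotation = R';               % transpose the rotation as well
Translation = T(:)';         % translation as a 1-by-3 row vector
images(1).CameraMatrix = [eye(3); zeros(1,3)] * K1;     % camera 1 at the origin
images(2).CameraMatrix = [Rotation; Translation] * K2;  % camera 2 relative to camera 1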
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
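A minimal sketch of that dense route, assuming a stereoParams object from the Stereo Camera Calibrator app and the original left/right images I1 and I2:
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
points3D = reconstructScene(disparityMap, stereoParams);  % 3-D points in the calibration units (e.g. mm)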
I have a camera and its K matrix (calibration matrix). I also have an image of a plane; I know the real-world coordinates of its 4 corners and their corresponding pixels. I know how to compute the H matrix if z = 0 (H is the homography between the image and the real plane).
Now I am trying to get the real 3D points of the plane using the rotation matrix and the translation vector.
I am following this paper: Calibrating an Overhead Video Camera by Raul Rojas, section 3 - 3.3.
My code is:
ImagePointsScreen=[16,8,1;505,55,1;505,248,1;44,301,1;];
screenImage=imread( 'screen.jpg');
RealPointsMirror=[0,0,1;9,0,1;9,6,1;0,6,1]; %Mirror
RealPointsScreen=[0,0,1;47.5,0,1;47.5,20,1;0,20,1];%Screen
imagesc(screenImage);
hold on
for i = 1:4
    drawBubble(ImagePointsScreen(i,1), ImagePointsScreen(i,2), 1, 'g', int2str(i), 'r')
end
Points3DScreen=Get3DpointSurface(RealPointsScreen,ImagePointsScreen,'Screen');
figure
hold on
plot3(Points3DScreen(:,1),Points3DScreen(:,2),Points3DScreen(:,3));
for i = 1:4
    drawBubble(Points3DScreen(i,1), Points3DScreen(i,2), 1, 'g', int2str(i), 'r')
end
function [ Points3D ] = Get3DpointSurface( RealPoints,ImagePoints,name)
M=zeros(8,9);
for i = 1:4
    M((i*2)-1,1:3) = -RealPoints(i,:);
    M((i*2)-1,7:9) = RealPoints(i,:)*ImagePoints(i,1);
    M(i*2,4:6) = -RealPoints(i,:);
    M(i*2,7:9) = RealPoints(i,:)*ImagePoints(i,2);
end
[U S V] = svd(M);
X = V(:,end);
H(1,:)=X(1:3,1)';
H(2,:)=X(4:6,1)';
H(3,:)=X(7:9,1)';
K=[680.561906875074,0,360.536967117290;0,682.250270165388,249.568615725655;0,0,1;];
newRO=pinv(K)*H;
h1=newRO(1:3,1);
h2=newRO(1:3,2);
scaleFactor=(norm(h1)+norm(h2))/2;
newRO=newRO./scaleFactor;
r1=newRO(1:3,1);
r2=newRO(1:3,2);
r3=cross(r1,r2);
r3=r3/norm(r3);
R=[r1,r2,r3];
RInv=pinv(R);
O=-RInv*newRO(1:3,3);
M=K*[R,-R*O];
for i = 1:4
    res = pinv(M)*[ImagePoints(i,1), ImagePoints(i,2), 1]';
    res = res';
    res = res*(1/res(1,4));
    Points3D(i,:) = res';
end
Points3D(i+1,:) = Points3D(1,:); % repeat the first point at the end of the array so the square closes when drawn
end
My result is:
Now I have two problems:
1. Point 1 is at (0,0,0), and this is not its real location.
2. The points are upside down.
What am I doing wrong?
A homography is normally the transform of a plane between two positions/rotations.
The position of a plane in camera coordinates is normally called the pose, or the extrinsic parameters.
OpenCV has a solvePnP() function (and a RANSAC variant, solvePnPRansac()) that estimates the position of a known plane.
P.S. Sorry, I don't know the MATLAB equivalent, but Bouguet has a MATLAB version of the OpenCV 3D functions on his site.
I found the answer in the paper: Calibrating an Overhead Video Camera by Raul Rojas in section 3 - 3.3.
To start, set H = K^-1 * H.
Given four points in the image and their known coordinates in the world, the matrix H can be recovered up to a scaling factor λ. We know that the first two columns of the rotation matrix R must be the first two columns of the transformation matrix. Let h1, h2, and h3 denote the three columns of the matrix H. Due to the scaling factor we then have
λ*r1 = h1
and
λ*r2 = h2
Since |r1| = 1, it follows that λ = |h1|/|r1| = |h1| and λ = |h2|/|r2| = |h2|. We can thus compute the factor and eliminate it from the recovered matrix. We just set
H' = H/λ
In this way we recover the first two columns of the rotation matrix R.
The third column of R can be found by remembering that any column of a rotation matrix is the cross product of the other two columns (times the appropriate plus or minus sign). In particular,
r3 = r1 × r2
Therefore we can recover the rotation matrix R from H. We can also recover the translation vector (the position of the camera in field coordinates). Just remember that
h'3 = -R*t
Therefore the position vector of the camera pinhole t is given by
t = -R^-1 * h'3
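In MATLAB terms, the recipe above comes down to something like this sketch, assuming H is the 3x3 homography estimated from the four correspondences and K is the intrinsic matrix:
Hp = K \ H;                                    % H' = inv(K)*H
lambda = (norm(Hp(:,1)) + norm(Hp(:,2))) / 2;  % the scale factor
Hp = Hp / lambda;
r1 = Hp(:,1);
r2 = Hp(:,2);
r3 = cross(r1, r2);
R  = [r1, r2, r3];
t  = -R \ Hp(:,3);                             % camera position in field coordinates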