I am working with stereo vision for the first time. I am trying to rectify a pair of stereo images. The following is the result.
I can't understand why the image is getting cropped.
Here is my code:
% Read in the stereo pair of images.
I1 = imread('sceneReconstructionLeft.jpg');
I2 = imread('sceneReconstructionRight.jpg');
% Rectify the images.
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
% Display the images before rectification.
figure;
imshow(stereoAnaglyph(I1, I2), 'InitialMagnification', 50);
title('Before Rectification');
% Display the images after rectification.
figure;
imshow(stereoAnaglyph(J1, J2), 'InitialMagnification', 50);
title('After Rectification');
I am trying to follow this guide
http://www.mathworks.com/help/vision/examples/stereo-calibration-and-scene-reconstruction.html
The images I used
Try doing the following:
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams, 'OutputView', 'Full');
This way you will see the entire images.
By default, rectifyStereoImages crops the output images to contain only the overlap between the two views. In this case the overlap is very small because the disparity is so large.
What is happening here is that the baseline (distance between the cameras) is too wide, and the distance to the objects is too short. This results in a very large disparity, which will be hard to compute reliably. I suggest that you either move the cameras closer together, or move the cameras further away from the objects of interest, or both.
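For intuition (this is not from the original answer): in a rectified pinhole stereo pair the disparity is roughly the focal length in pixels times the baseline divided by the depth, so a wide baseline and a close subject both inflate it. A toy calculation with made-up numbers:
% Toy numbers only (assumptions, not measured from the posted setup): disparity ~ f * B / Z
f = 1200;               % focal length in pixels
B = 0.5;                % baseline in meters
Z = 1.5;                % distance to the object in meters
disparity = f * B / Z   % = 400 pixels, a very large disparity to match reliably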
I have many medical images in DICOM (.dcm) format. There are circles in the photos (6 circles, but the sixth is too small to see).
I want to calculate the radii of these circles very precisely; I need at least the radii of the first two or three circles. I use the Hough transform for this, but the results are not yet good enough for my purpose. How could I find these circles and their radii more precisely? The images and my code are below.
clear
close all
imtool close all
clc
% I - raw image data
I = dicomread('193500.dcm');
% scale factor: 253 mm over 1024 pixels, i.e. mm per pixel
k = 253/1024;
% Wiener filter for noise reduction
I = wiener2(I,[6,6]);
% Optimizing the contrast
I = imadjust(I,[0.388 0.398]);
imshow(I)
% Check for circles
[centers, radii] = imfindcircles(I,[6 40],'ObjectPolarity','dark');
% representation of the circles
viscircles(centers, radii,'Color','b');
% sort radii from largest to smallest and convert to millimeters
radii = sort(radii,'descend');
radii = k*radii;
radii(2:4)
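Not part of the original question, but one direction worth trying is tuning imfindcircles itself. The sketch below raises the sensitivity, lowers the edge threshold, and switches to the two-stage method, which can give more stable radius estimates; the parameter values are only guesses to adjust.
% Sketch: same detection, with tuned imfindcircles parameters (values are guesses)
[centers, radii] = imfindcircles(I, [6 40], ...
    'ObjectPolarity', 'dark', ...
    'Method', 'TwoStage', ...      % alternative radius-estimation method
    'Sensitivity', 0.92, ...       % detect fainter circles (default is 0.85)
    'EdgeThreshold', 0.1);         % accept weaker edges
viscircles(centers, radii, 'Color', 'b');
radii_mm = k * sort(radii, 'descend')   % radii in millimeters, largest first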
Here is a model for face detection. The bounding box is used to track and detect the face. Active contours need to be added to get the exact shape of the face, not just its features. To do this we need to segment the frames within the video. I have applied segmentation on a still image, where you can choose where to initialize the segmentation; with video, however, it needs to be dynamic and fast at the same time, since it loops through live images that are not stored anywhere. I want both the bounding box and the active contours. Can anyone guide me on how to achieve this? Here is the code so far:
tic;
clear all
close all
clc
%Create the face detector object.
faceDetector = vision.CascadeObjectDetector();
%Get the input device using the Image Acquisition Toolbox; resolution = 320x240 to improve performance
obj = imaq.VideoDevice('winvideo', 1, 'YUY2_320x240','ROI', [1 1 320 240]);
set(obj,'ReturnedColorSpace', 'rgb'); % Set object to RGB colours
figure('menubar','none','tag','webcam');
%preview (obj)
while (true)
frame=step(obj);
%----IT IS TO DO WITH THIS PART OF CODE
%m = zeros(size(frame,1),size(frame,2)); %-- create initial mask
%m(20:222,20:250) = 1; % initialize the mask over this region of the image
%m = imresize(m,.5); % for fast computation
%seg = region_seg(frame, m, 300); %-- run segmentation
bbox=step(faceDetector,frame);
boxInserter = insertObjectAnnotation(frame,'rectangle',bbox,'Face Detected');
imshow(boxInserter,'border','tight');
f=findobj('tag','webcam');
if isempty(f)
close(gcf)
break
end
pause(0.05)
end
release(obj)
toc;
Some example of what I want to do: http://groups.inf.ed.ac.uk/calvin/FastVideoSegmentation/
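Not from the original post: below is a rough sketch of one way to combine the cascade detector's bounding box with MATLAB's activecontour function inside the capture loop, using the detected box as the initial mask on each frame. The iteration count and the green contour overlay are assumptions to adapt for speed and accuracy.
% Rough sketch (assumptions marked): active contour initialized from the face box
faceDetector = vision.CascadeObjectDetector();
obj = imaq.VideoDevice('winvideo', 1, 'YUY2_320x240', 'ROI', [1 1 320 240]);
set(obj, 'ReturnedColorSpace', 'rgb');
figure('menubar','none','tag','webcam');
while true
    frame = step(obj);                        % single-precision RGB frame in [0,1]
    gray  = rgb2gray(frame);
    bbox  = step(faceDetector, frame);
    out   = insertObjectAnnotation(frame, 'rectangle', bbox, 'Face Detected');
    if ~isempty(bbox)
        % Build the initial mask from the first detected face box [x y width height]
        b  = round(bbox(1,:));
        r1 = max(b(2), 1);  r2 = min(b(2)+b(4)-1, size(gray,1));
        c1 = max(b(1), 1);  c2 = min(b(1)+b(3)-1, size(gray,2));
        mask = false(size(gray));
        mask(r1:r2, c1:c2) = true;
        % Few iterations so the loop stays close to real time (assumed value)
        seg = activecontour(gray, mask, 50);
        % Draw the contour outline in green on top of the annotated frame
        out(:,:,2) = max(out(:,:,2), single(bwperim(seg)));
    end
    imshow(out, 'border', 'tight');
    f = findobj('tag','webcam');
    if isempty(f)
        close(gcf)
        break
    end
    pause(0.05)
end
release(obj)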
I need to find similar corners in an image (for example, the 4 corners of a rectangle: the same corners, just at different orientations).
I have this code:
% read the image into MATLAB and convert it to grayscale
I = imread('s2.jpg');
Igray = rgb2gray(I);
figure, imshow(I);
% We can see that the image is noisy. We will clean it up with a few
% morphological operations
Ibw = im2bw(Igray,graythresh(Igray));
se = strel('line',3,90);
cleanI = imdilate(~Ibw,se);
figure, imshow(cleanI);
% Perform a Hough Transform on the image
% The Hough Transform identifies lines in an image
[H,theta,rho] = hough(cleanI);
peaks = houghpeaks(H,10);
lines = houghlines(Ibw,theta,rho,peaks);
figure, imshow(cleanI)
% Highlight (by changing color) the lines found by MATLAB
hold on
After running this code I convert my starting image into a binary image with:
binary = im2bw(I);
After this I take the product of those two binary images, and I think I get the corners:
product = binary .* cleanI;
Now I imfuse this picture with the grayscale starting picture and get this:
I don't know what to do to get only those 4 corners!
OK, second try. Below is some code that does not completely do the job, but it might help.
edge identifies the contours, and with regionprops you get the characteristics of each identified element. As soon as you know what characteristics your desired object has, you can filter for it and plot it. I went through the areas in shapedata.Area and the 6th largest was the one you were searching for. If you combine Area with some of the other characteristics you might get the one you want. As I said, not ideal or final, but perhaps a start...
clear all
close all
source = imread('Mobile Phone.jpg');
im = rgb2gray(source);
bw = edge(im,'canny',[],sqrt(2));
shapedata = regionprops(bwlabel(bw,8),'all');
%index = find([shapedata.Area]== max([shapedata.Area]));
index = 213;
data = shapedata(index).PixelList;
figure
imshow(im)
hold on
plot(data(:,1),data(:,2),'ro');
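Not part of the original answer: a small sketch of how the hard-coded index could be replaced by ranking the regions by area and picking the 6th largest, as described above; the rank of 6 is only taken from that description.
% Sketch: select the region by ranked area instead of a hard-coded index
[~, order] = sort([shapedata.Area], 'descend');  % regions from largest to smallest
index = order(6);                                % 6th largest area, per the answer above
data  = shapedata(index).PixelList;
figure, imshow(im), hold on
plot(data(:,1), data(:,2), 'ro');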
I'm trying to reconstruct the shape of a sail using the 3D sparse reconstruction method. I'm using two cameras with which I took two pictures, and I managed to calibrate the cameras as well. The checkerboard is visible in the pictures and the code I wrote detects it properly.
Now, since my pictures are black and white and the quality of the cameras is quite low, I cannot use the feature detection functions properly, and problems arise when I try to use matchFeatures. To overcome this I decided to use the cpselect tool instead, so I can manually click on the features. The matching between points from the two views now seems to be correct. However, when I carry on with the code and try to reconstruct the 3D plot, I get points all over the place and the result looks deformed. The plot clearly does not represent the sail and I don't know why.
The code follows.
Thank you in advance
% % Load precomputed camera parameters
load IP_CalibrationCarlos.mat %Calibration feature
%
I1 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan1.1.jpg');
I2 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan2.1.jpg');
%
[I1, newOrigin1] = undistortImage(I1, cameraParameters, 'OutputView', 'full');
[I2, newOrigin2] = undistortImage(I2, cameraParameters, 'OutputView', 'full');
%
I1 = imcrop(I1, [80 10 1040 1300]); %Necessary so images have same size
I2 = imcrop(I2, [0 10 1067 1300]);
%
squareSize = 82; % checkerboard square size in millimeters
%
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(rgb2gray(I1), rgb2gray(I2));
[refPoints1, boardSize] = detectCheckerboardPoints(rgb2gray(I1));
[refPoints2, boardSize] = detectCheckerboardPoints(rgb2gray(I2));
%
% % Translate detected points back into the original image coordinates
refPoints1 = bsxfun(@plus, refPoints1, newOrigin1);
refPoints2 = bsxfun(@plus, refPoints2, newOrigin2);
%
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
%
[R1, t1] = extrinsics(refPoints1, worldPoints, cameraParameters); %R = r t = translation
[R2, t2] = extrinsics(refPoints2, worldPoints, cameraParameters);
%
% % Calculate camera matrices using the |cameraMatrix| function.
cameraMatrix1 = cameraMatrix(cameraParameters, R1, t1);
cameraMatrix2 = cameraMatrix(cameraParameters, R2, t2);
%
cpselect(I1, I2); % Save the selected points as 'matchedPoints1' and 'matchedPoints2'
%
indexPairs = matchFeatures(matchedPoints1, matchedPoints2);
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title('Matched Features');
%
[points3D] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
%
x = -points3D(:,1);
y = -points3D(:,2);
z = -points3D(:,3);
figure
scatter3(x,y,z, 25);
xlabel('X');
ylabel('Y');
zlabel('Z');
The first problem you have is that you are cropping the images. Once you do that, all your coordinates are off. You do not need to do that here, because the images do not need to be the same size.
The second question is how precisely did you select the matching points? From the picture you have posted, it seems that your matches can be off by a few pixels, which can result in a large reconstruction error. Can you try finding the centroids of the spots on the sail using regionprops?
Also, are the two cameras stationary relative to each other? If so, then you may be better off calibrating the stereo pair, and doing the dense reconstruction as in this example. In that case, the two images would have to be of the same size.
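Not part of the original answer: a minimal sketch of the regionprops idea, assuming the spots on the sail are darker than the sail so a simple global threshold separates them; the threshold and the minimum region size are placeholders to adjust.
% Sketch: find the sail spots as thresholded regions and use their centroids
gray = I1;
if size(gray, 3) == 3
    gray = rgb2gray(gray);                 % handle RGB input
end
bw = ~im2bw(gray, graythresh(gray));       % assumed: dark spots on a lighter sail
bw = bwareaopen(bw, 20);                   % drop tiny noise regions (size is a guess)
stats = regionprops(bw, 'Centroid');
centroids1 = vertcat(stats.Centroid);      % N-by-2 [x y] points with sub-pixel centroids
% Repeat for I2, pair the centroids between the two views (e.g. in the order
% picked with cpselect), and pass the two point sets to triangulate as before.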
I am working on sparse reconstruction using a calibrated stereo pair. This is the approach I have taken, step by step:
1- I calibrated my stereo cameras using the Stereo Camera Calibrator app in MATLAB.
2- I took a pair of stereo images and undistorted each image.
3- I detected, extracted, and matched point features.
4- I used the triangulate function in MATLAB to get 3D coordinates of the matched points by passing the stereoParameters object into triangulate. The resulting 3D coordinates are with respect to the optical center of camera 1 (the right camera) and are in millimeters.
The problem is that the point cloud seems to be warped and curved towards the edges of the image. At first it looked like barrel distortion of the lenses to me, so I recalibrated the Bumblebee XB3 cameras with the MATLAB Camera Calibrator app, this time using 3 radial distortion coefficients and also including the tangential and skew parameters, but the results are the same. I also tried Caltech's camera calibration toolbox, and it gave the same results as MATLAB; the radial distortion coefficients are similar in both toolboxes. Another problem is that the Z values in the point cloud are all negative, but I think that might come from the fact that I am using the right camera as camera 1 and the left camera as camera 2, as opposed to MATLAB's coordinate system in the link attached.
I have attached a couple of pictures of the 3D point cloud from both sparse and dense 3D reconstruction. I am not interested in dense 3D; I just wanted to do it to see whether the problem still exists, which it does. I believe that means the main problem is with the images and camera calibration rather than with the algorithms.
Now my questions are:
1- What is the main reason (or reasons) for the warped/curved 3D point clouds? Is it camera calibration only, or can other steps introduce error as well? How can I check for that?
2- Can you suggest another camera calibration toolbox besides MATLAB's and Caltech's? Maybe one that is better suited to radial distortion?
Thanks
Images:
Links:
coordinate system
Code:
clear
close all
clc
load('mystereoparams.mat');
I11 = imread('Right.tif');
I22 = imread('Left.tif');
figure, imshowpair(I11, I22, 'montage');
title('Pair of Original Images');
[I1, newOrigin1] = undistortImage(I11,stereoParams.CameraParameters1);
[I2, newOrigin2] = undistortImage(I22,stereoParams.CameraParameters2);
figure, imshowpair(I1, I2, 'montage');
title('Undistorted Images');
% Detect feature points
imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
% Extract feature descriptors
features1 = extractFeatures(rgb2gray(I1), imagePoints1);
features2 = extractFeatures(rgb2gray(I2), imagePoints2);
% Visualize several extracted SURF features
figure;
imshow(I1);
title('1500 Strongest Feature Points from Image1');
hold on;
plot(selectStrongest(imagePoints1, 1500));
indexPairs = matchFeatures(features1, features2, 'MaxRatio', 0.4);
matchedPoints1 = imagePoints1(indexPairs(:, 1));
matchedPoints2 = imagePoints2(indexPairs(:, 2));
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2,'montage');
title('Original Matched Features from Globe01 and Globe02');
% Transform matched points to the original image's coordinates
matchedPoints1.Location = bsxfun(@plus, matchedPoints1.Location, newOrigin1);
matchedPoints2.Location = bsxfun(@plus, matchedPoints2.Location, newOrigin2);
[Cloud, reprojErrors] = triangulate(matchedPoints1, matchedPoints2, stereoParams);
figure;plot3(Cloud(:,1),Cloud(:,2),Cloud(:,3),'b.');title('Point Cloud before noisy match removal');
xlabel('X'), ylabel('Y'), zlabel('Depth (Z) in mm')
% Eliminate noisy points
meanmean=mean(sqrt(sum(reprojErrors .^ 2, 2)))
standdev=std(sqrt(sum(reprojErrors .^ 2, 2)))
errorDists = sqrt(sum(reprojErrors .^ 2, 2)); % reprojection error magnitude per point
validIdx = errorDists < meanmean+standdev;
tt1=find(Cloud(:,3)>0);         % drop points with positive Z (the scene reconstructs at negative Z here)
validIdx(tt1)=0;
tt2=find(abs(Cloud(:,3))>1800); % drop points farther than 1800 mm
validIdx(tt2)=0;
tt3=find(abs(Cloud(:,3))<1000); % drop points closer than 1000 mm
validIdx(tt3)=0;
points3D = Cloud(validIdx, :);
figure;plot3(points3D(:,1),points3D(:,2),points3D(:,3),'b.');title('Point Cloud after noisy match removal');
xlabel('X'), ylabel('Y'), zlabel('Depth (Z) in mm')
validPoints1 = matchedPoints1(validIdx, :);
validPoints2 = matchedPoints2(validIdx, :);
figure;
showMatchedFeatures(I1, I2, validPoints1,validPoints2,'montage');
title('Matched Features After Removing Noisy Matches');
% get the color of each reconstructed point
validPoints1 = round(validPoints1.Location);
numPixels = size(I1, 1) * size(I1, 2);
allColors = reshape(im2double(I1), [numPixels, 3]);
colorIdx = sub2ind([size(I1, 1), size(I1, 2)], validPoints1(:,2), ...
validPoints1(:, 1));
color = allColors(colorIdx, :);
% add green point representing the origin
points3D(end+1,:) = [0,0,0];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1);
imshowpair(I1, I2, 'montage');
title('Original Images')
% plot point cloud
hAxes = subplot(1,2,2);
showPointCloud(points3D, color, 'Parent', hAxes, ...
'VerticalAxisDir', 'down', 'MarkerSize', 40);
xlabel('x-axis (mm)');
ylabel('y-axis (mm)');
zlabel('z-axis (mm)')
title('Reconstructed Point Cloud');
figure, scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
title('Final colored Reconstructed Point Cloud');
Your code looks right. The problem seems to be in the calibration. The fact that you still get a warped reconstruction with 3 radial distortion coefficients tells me that you may not have enough data points close to the edges of the image to estimate the distortion accurately. It is hard to tell from your images, though. If you take a picture of a scene with many straight edges and undistort it, you will get a better idea.
So I would recommend taking more images with the checkerboard as close to the edges of the image as you can get. See if that helps.
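Not part of the original answer: a tiny sketch of that check, assuming you have a picture of a scene with straight edges (the file name is a placeholder). If straight lines still look bent after undistortImage, the distortion model is probably under-estimated.
% Sketch: visually check the estimated distortion on a scene with straight edges
I = imread('straight_edges.tif');             % placeholder file name
J = undistortImage(I, stereoParams.CameraParameters1, 'OutputView', 'full');
figure, imshowpair(I, J, 'montage');
title('Original vs. undistorted: straight edges should stay straight');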
Another thing to look at would be estimation errors. In R2014b the Stereo Camera Calibrator app can optionally return the standard error values for each of the estimated parameters. Those can give you confidence intervals and tell you whether you may need more data points. See this example.
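Not part of the original answer: the same standard errors are also available programmatically if the calibration is rerun with estimateCameraParameters; imagePoints and worldPoints here stand for the checkerboard points you would get from detectCheckerboardPoints and generateCheckerboardPoints.
% Sketch: request estimation errors when calibrating in code (R2014b or later)
[stereoParams, pairsUsed, estimationErrors] = estimateCameraParameters( ...
    imagePoints, worldPoints, 'WorldUnits', 'mm');
displayErrors(estimationErrors, stereoParams);   % standard error of each estimated parameter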
Oh, and also make sure that your calibration images are not saved as JPEG. Please use a lossless format such as TIFF or PNG.