I am working on sparse reconstruction using a calibrated stereo pair. This is the approach I have taken step by step:
1- I calibrated my stereo cameras using the Stereo Camera Calibrator app in MATLAB.
2- I took a pair of stereo images and undistorted each image.
3- I detected, extracted, and matched point features.
4- I used the triangulate function in MATLAB to get 3D coordinates of the matched points by passing the stereoParameters object into triangulate. The resulting 3D coordinates are with respect to the optical center of camera 1 (the right camera) and are in millimeters.
The problem is that the point cloud seems warped and curved towards the edges of the image. At first it looked like barrel distortion of the lenses, so I recalibrated the Bumblebee XB3 cameras using the MATLAB Camera Calibrator app, this time with 3 radial distortion coefficients and with the tangential and skew parameters included, but the results are the same. I also tried Caltech's camera calibration toolbox, which gave the same results as MATLAB; the radial distortion coefficients are similar in both toolboxes. Another problem is that the Z values in the point cloud are all negative, but I suspect that comes from using the right camera as camera 1 and the left camera as camera 2, the opposite of MATLAB's coordinate system in the link attached.
I have attached a couple of pictures of the 3D point cloud from both sparse and dense 3D reconstruction. I am not interested in dense 3D, but I ran it to see whether the problem persists, and it does. I believe that means the main problem is with the images and camera calibration rather than the algorithms.
Now my questions are:
1- What is the main reason (or reasons) for the warped/curved 3D point clouds? Is it camera calibration only, or can other steps introduce error as well? How can I check for that?
2- Can you suggest another camera calibration toolbox besides MATLAB's and Caltech's, maybe one that handles radial distortion better?
Thanks
Images:
Links:
coordinate system
Code:
clear
close all
clc
load('mystereoparams.mat');
I11 = imread('Right.tif');
I22 = imread('Left.tif');
figure, imshowpair(I11, I22, 'montage');
title('Pair of Original Images');
[I1, newOrigin1] = undistortImage(I11,stereoParams.CameraParameters1);
[I2, newOrigin2] = undistortImage(I22,stereoParams.CameraParameters2);
figure, imshowpair(I1, I2, 'montage');
title('Undistorted Images');
% Detect feature points
imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
% Extract feature descriptors
features1 = extractFeatures(rgb2gray(I1), imagePoints1);
features2 = extractFeatures(rgb2gray(I2), imagePoints2);
% Visualize several extracted SURF features
figure;
imshow(I1);
title('1500 Strongest Feature Points from Image1');
hold on;
plot(selectStrongest(imagePoints1, 1500));
indexPairs = matchFeatures(features1, features2, 'MaxRatio', 0.4);
matchedPoints1 = imagePoints1(indexPairs(:, 1));
matchedPoints2 = imagePoints2(indexPairs(:, 2));
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2,'montage');
title('Original Matched Features from Globe01 and Globe02');
% Transform matched points to the original image's coordinates
matchedPoints1.Location = bsxfun(@plus, matchedPoints1.Location, newOrigin1);
matchedPoints2.Location = bsxfun(@plus, matchedPoints2.Location, newOrigin2);
[Cloud, reprojErrors] = triangulate(matchedPoints1, matchedPoints2, stereoParams);
figure;plot3(Cloud(:,1),Cloud(:,2),Cloud(:,3),'b.');title('Point Cloud before noisy match removal');
xlabel('X'), ylabel('Y'), zlabel('Depth (Z) in mm')
% Eliminate noisy points
errorDists = sqrt(sum(reprojErrors .^ 2, 2)); % per-point reprojection error magnitude
meanmean = mean(errorDists)
standdev = std(errorDists)
validIdx = errorDists < meanmean + standdev;
tt1=find(Cloud(:,3)>0);
validIdx(tt1)=0;
tt2=find(abs(Cloud(:,3))>1800);
validIdx(tt2)=0;
tt3=find(abs(Cloud(:,3))<1000);
validIdx(tt3)=0;
points3D = Cloud(validIdx, :);
figure;plot3(points3D(:,1),points3D(:,2),points3D(:,3),'b.');title('Point Cloud after noisy match removal');
xlabel('X'), ylabel('Y'), zlabel('Depth (Z) in mm')
validPoints1 = matchedPoints1(validIdx, :);
validPoints2 = matchedPoints2(validIdx, :);
figure;
showMatchedFeatures(I1, I2, validPoints1,validPoints2,'montage');
title('Matched Features After Removing Noisy Matches');
% get the color of each reconstructed point
validPoints1 = round(validPoints1.Location);
numPixels = size(I1, 1) * size(I1, 2);
allColors = reshape(im2double(I1), [numPixels, 3]);
colorIdx = sub2ind([size(I1, 1), size(I1, 2)], validPoints1(:,2), ...
validPoints1(:, 1));
color = allColors(colorIdx, :);
% add green point representing the origin
points3D(end+1,:) = [0,0,0];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1);
imshowpair(I1, I2, 'montage');
title('Original Images')
% plot point cloud
hAxes = subplot(1,2,2);
showPointCloud(points3D, color, 'Parent', hAxes, ...
'VerticalAxisDir', 'down', 'MarkerSize', 40);
xlabel('x-axis (mm)');
ylabel('y-axis (mm)');
zlabel('z-axis (mm)')
title('Reconstructed Point Cloud');
figure, scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
title('Final colored Reconstructed Point Cloud');
Your code looks right. The problem seems to be in the calibration. The fact that you still get a warped result with 3 radial coefficients tells me that you may not have enough data points close to the edges of the image to estimate the distortion accurately. It is hard to tell from your images, though. If you take a picture of a scene with many straight edges and undistort it, you will get a better idea.
So I would recommend taking more images with the checkerboard as close to the edges of the image as you can get. See if that helps.
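One quick way to run that visual check is sketched below; 'straightEdges.tif' is a hypothetical new photo from camera 1 of a scene with straight lines (door frames, shelves, tiles):
S  = imread('straightEdges.tif');                       % hypothetical test image
Su = undistortImage(S, stereoParams.CameraParameters1);
figure, imshowpair(S, Su, 'montage');
title('Straight-edge scene: original vs. undistorted');
% If lines still bow near the borders after undistortion, the distortion
% coefficients are not capturing the lens well enough yet.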
Another thing to look at would be estimation errors. In R2014b the Stereo Camera Calibrator app can optionally return the standard error values for each of the estimated parameters. Those can give you confidence intervals and tell you whether you may need more data points. See this example.
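For the programmatic route, here is a hedged sketch of getting those standard errors, assuming leftImages and rightImages are cell arrays of the calibration image file names and squareSizeInMM is your checkerboard square size (both placeholders):
[imagePoints, boardSize] = detectCheckerboardPoints(leftImages, rightImages);
worldPoints = generateCheckerboardPoints(boardSize, squareSizeInMM);
[stereoParams, pairsUsed, estimationErrors] = ...
    estimateCameraParameters(imagePoints, worldPoints);
displayErrors(estimationErrors, stereoParams); % standard error of each estimated parameter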
Oh, and also make sure that your calibration images are not saved as jpeg. Please use a lossless format like tiff or png.
I want to convert an image from Cartesian to polar coordinates and use it as an OpenGL texture.
So I used MATLAB, referring to the two articles below.
Link 1
Link 2
My code is essentially the same as the answer in Link 2:
% load image
img = imread('my_image.png');
% convert pixel coordinates from cartesian to polar
[h,w,~] = size(img);
[X,Y] = meshgrid((1:w)-floor(w/2), (1:h)-floor(h/2));
[theta,rho] = cart2pol(X, Y);
Z = zeros(size(theta));
% show pixel locations (subsample to get less dense points)
XX = X(1:8:end,1:4:end);
YY = Y(1:8:end,1:4:end);
tt = theta(1:8:end,1:4:end);
rr = rho(1:8:end,1:4:end);
subplot(121), scatter(XX(:),YY(:),3,'filled'), axis ij image
subplot(122), scatter(tt(:),rr(:),3,'filled'), axis ij square tight
% show images
figure
subplot(121), imshow(img), axis on
subplot(122), warp(theta, rho, Z, img), view(2), axis square
The result was exactly what I wanted, and I was very satisfied except for one thing: the area circled in red in the picture just below. Considering that the opposite side (circled in blue) is filled, I think this part should be filled as well. Because this part is empty, there is a problem when I use the result as a texture.
I wonder how I can fill this part. Thank you.
(There are small differences from Link 2's answer code, such as degrees vs. radians and axis values, but I don't think they matter.)
Those issues you show in your question happen because your algorithm is wrong.
What you did (push):
throw a grid on the source image
transform those points
try to plot these colored points and let MATLAB do some magic to make it look like a dense picture
Do it the other way around (pull):
throw a grid on the output
transform that backwards
sample the input at those points
The distinction is called "push" (into the output) vs. "pull" (from the input). Only "pull" gives proper results.
Very little MATLAB code is necessary. You just need pol2cart and interp2, and a meshgrid.
With interp2 you get to choose the interpolation (linear, cubic, ...). Nearest-neighbor interpolation leaves visible artefacts.
im = im2single(imread("PQFax.jpg"));
% center of polar map, manually picked
cx = 10 + 409/2;
cy = 7 + 413/2;
% output parameters
radius = 212;
dRho = 1;
dTheta = 2*pi / (2*pi * radius);
Thetas = pi/2 - (0:dTheta:2*pi);
Rhos = (0:dRho:radius);
% polar mesh
[Theta, Rho] = meshgrid(Thetas, Rhos);
% transform...
[Xq,Yq] = pol2cart(Theta, Rho);
% translate to sit on the circle's center
Xq = Xq + cx;
Yq = Yq + cy;
% sample image at those points
Ro = interp2(im(:,:,1), Xq,Yq, "cubic");
Go = interp2(im(:,:,2), Xq,Yq, "cubic");
Bo = interp2(im(:,:,3), Xq,Yq, "cubic");
Vo = cat(3, Ro, Go, Bo);
Vo = imrotate(Vo, 180);
imshow(Vo)
The other way around (getting a "torus" from a "ribbon") is quite similar: throw a meshgrid on the torus space, subtract the center, transform from Cartesian to polar, and use those coordinates to sample from the "ribbon" image into the "torus" image.
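Here is a minimal sketch of that reverse direction, assuming 'ribbon.jpg' is a placeholder for an unrolled image whose rows run along rho and whose columns run along theta:
ribbon = im2single(imread("ribbon.jpg"));   % hypothetical unrolled input
radius = size(ribbon, 1) - 1;               % rho increases down the rows
% grid over the output (cartesian) torus image, centered on the middle
[Xq, Yq] = meshgrid(-radius:radius, -radius:radius);
[Theta, Rho] = cart2pol(Xq, Yq);
% map (theta, rho) to (column, row) sample positions in the ribbon
col = 1 + mod(Theta, 2*pi) / (2*pi) * (size(ribbon, 2) - 1);
row = 1 + Rho;                              % dRho = 1, as above
torus = zeros(size(Xq, 1), size(Xq, 2), 3, 'single');
for c = 1:3
    torus(:,:,c) = interp2(ribbon(:,:,c), col, row, "cubic");
end
imshow(torus)                               % samples outside the disc come back as NaN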
I'm more familiar with OpenCV than with MATLAB. Perhaps MATLAB has something like OpenCV's warpPolar(), or a generic remap(). In any case, the operation is trivial to do entirely "by hand" but there are enough supporting functions to take the heavy lifting off your hands (interp2, pol2cart, meshgrid).
1.- The white arcs show that the polar-to-Cartesian translation used introduces significant errors.
2.- Reversing the following script answers your question.
It's a script that goes from Cartesian to polar without introducing errors or ignoring input data, which is what happens when such wide white arcs show up in an apparently correct translation.
clear all; close all; clc
format long
A=imread('shaffen dass.jpg');
[sz1 sz2 sz3]=size(A);
szx=sz2;szy=sz1;
A1=A(:,:,1);A2=A(:,:,2);A3=A(:,:,3); % working with binary maps or grey scale images this wouldn't be necessary
figure(1);imshow(A);
hold all;
Cx=floor(szx/2);Cy=floor(szy/2);
plot(Cx,Cy,'co'); % because observe image centre not centered
Rmin=80;Rmax=400; % radius search range for imfindcircles
[centers, radii]=imfindcircles(A,[Rmin Rmax],... % outer circle
'ObjectPolarity','dark','Sensitivity',0.9);
h=viscircles(centers,radii);
hold all; % inner circle
[centers2, radii2]=imfindcircles(A,[Rmin Rmax],...
'ObjectPolarity','bright');
h=viscircles(centers2,radii2);
% L=floor(.5*(radii+radii2)); % this is NOT the length X that should have the resulting XY morphed graph
L=floor(2*pi*radii); % expected length of the morphed graph
cx=floor(.5*(centers(1)+centers2(1))); % coordinates of averaged circle centres
cy=floor(.5*(centers(2)+centers2(2)));
plot(cx,cy,'r*'); % check avg centre circle is not aligned to figure centre
plot([cx 1],[cy 1],'r-.');
t=[45:360/L:404+1-360/L]; % if step=1 then we only get 360 points but need an amount of L points
% if angle step 1/L over minute waiting for for loop to finish
R=radii+5;x=R*sind(t)+cx;y=R*cosd(t)+cy; % build outer perimeter
hL1=plot(x,y,'m'); % axis equal;grid on;
% hold all;
% plot(hL1.XData,hL1.YData,'ro');
x_ref=hL1.XData;y_ref=hL1.YData;
% Sx=zeros(ceil(R),1);Sy=zeros(ceil(R),1);
Sx={};Sy={};
for k=1:1:numel(hL1.XData)
Lx=floor(linspace(x_ref(k),cx,ceil(R)));
Ly=floor(linspace(y_ref(k),cy,ceil(R)));
% plot(Lx,Ly,'go'); % check
% plot([cx x(k)],[cy y(k)],'r');
% L1=unique([Lx;Ly]','rows');
Sx=[Sx Lx'];Sy=[Sy Ly'];
end
sx=cell2mat(Sx);sy=cell2mat(Sy);
[s1 s2]=size(sx);
B1=uint8(zeros(s1,s2));
B2=uint8(zeros(s1,s2));
B3=uint8(zeros(s1,s2));
for n=1:1:s2
for k=1:1:s1
B1(k,n)=A1(sx(k,n),sy(k,n));
B2(k,n)=A2(sx(k,n),sy(k,n));
B3(k,n)=A3(sx(k,n),sy(k,n));
end
end
C=uint8(zeros(s1,s2,3));
C(:,:,1)=B1;
C(:,:,2)=B2;
C(:,:,3)=B3;
figure(2);imshow(C);
The resulting image:
3.- Let me know if you'd like some assistance writing the polar-to-Cartesian version from this script.
Regards
John BG
I am working with stereo vision for the first time. I am trying to rectify the stereo images. The following is the result:
I can't understand why the image is getting cropped.
The following is my code:
% Read in the stereo pair of images.
I1 = imread('sceneReconstructionLeft.jpg');
I2 = imread('sceneReconstructionRight.jpg');
% Rectify the images.
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
% Display the images before rectification.
figure;
imshow(stereoAnaglyph(I1, I2), 'InitialMagnification', 50);
title('Before Rectification');
% Display the images after rectification.
figure;
imshow(stereoAnaglyph(J1, J2), 'InitialMagnification', 50);
title('After Rectification');
I am trying to follow this guide
http://www.mathworks.com/help/vision/examples/stereo-calibration-and-scene-reconstruction.html
The images I used
Try doing the following:
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams, 'OutputView', 'Full');
This way you will see the entire images.
By default, rectifyStereoImages crops the output images to contain only the overlap between the two frames. In this case the overlap is very small because the disparity is so large.
What is happening here is that the baseline (distance between the cameras) is too wide, and the distance to the objects is too short. This results in a very large disparity, which will be hard to compute reliably. I suggest that you either move the cameras closer together, or move the cameras further away from the objects of interest, or both.
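To get a feel for the numbers, here is a rough back-of-the-envelope check; the values below are placeholders, not your actual calibration:
% for rectified stereo, disparity grows with baseline and shrinks with distance: d = f*B/Z
f = 1000;           % focal length in pixels (placeholder)
B = 240;            % baseline in mm (placeholder)
Z = 600;            % distance to the object in mm (placeholder)
d = f * B / Z       % ~400 px of disparity, far beyond typical disparity search ranges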
I'm trying to reconstruct the shape of a sail using sparse 3D reconstruction. I'm using two cameras, with which I took two pictures, and I managed to calibrate the cameras as well. The checkerboard is visible in the pictures, and the code I wrote detects it properly.
Now, since my pictures are black and white and the quality of the cameras is quite low, I cannot use feature detection properly, and problems arise when I try to use matchFeatures. To overcome this I decided to use cpselect instead, so I can click on the features manually. The matching between points from the two views now seems to be correct. But when I carry on with the code and try to reconstruct the 3D plot, I get points all over the place; it looks deformed. The plot clearly does not represent the sail and I don't know why.
The code follows.
Thank you in advance
% % Load precomputed camera parameters
load IP_CalibrationCarlos.mat %Calibration feature
%
I1 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan1.1.jpg');
I2 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan2.1.jpg');
%
[I1, newOrigin1] = undistortImage(I1, cameraParameters, 'OutputView', 'full');
[I2, newOrigin2] = undistortImage(I2, cameraParameters, 'OutputView', 'full');
%
I1 = imcrop(I1, [80 10 1040 1300]); %Necessary so images have same size
I2 = imcrop(I2, [0 10 1067 1300]);
%
squareSize = 82; % checkerboard square size in millimeters
%
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(rgb2gray(I1), rgb2gray(I2));
[refPoints1, boardSize] = detectCheckerboardPoints(rgb2gray(I1));
[refPoints2, boardSize] = detectCheckerboardPoints(rgb2gray(I2));
%
% % Translate detected points back into the original image coordinates
refPoints1 = bsxfun(@plus, refPoints1, newOrigin1);
refPoints2 = bsxfun(@plus, refPoints2, newOrigin2);
%
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
%
[R1, t1] = extrinsics(refPoints1, worldPoints, cameraParameters); %R = r t = translation
[R2, t2] = extrinsics(refPoints2, worldPoints, cameraParameters);
%
% % Calculate camera matrices using the |cameraMatrix| function.
cameraMatrix1 = cameraMatrix(cameraParameters, R1, t1);
cameraMatrix2 = cameraMatrix(cameraParameters, R2, t2);
%
cpselect(I1, I2); % Save them as 'matchedPoints1'and 'matchedPoints2'
%
indexPairs = matchFeatures(matchedPoints1, matchedPoints2);
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title('Matched Features');
%
[points3D] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
%
x = -points3D(:,1);
y = -points3D(:,2);
z = -points3D(:,3);
figure
scatter3(x,y,z, 25);
xlabel('X');
ylabel('Y');
zlabel('Z');
The first problem you have is that you are cropping the images. Once you do that, all your coordinates are off. You do not need to do that here, because the images do not need to be the same size.
The second question is how precisely did you select the matching points? From the picture you have posted, it seems that your matches can be off by a few pixels, which can result in a large reconstruction error. Can you try finding the centroids of the spots on the sail using regionprops?
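Here is a minimal sketch of the regionprops idea, assuming the spots on the sail are bright blobs on a darker background; the threshold and the minimum blob size are placeholders you would tune:
G  = rgb2gray(I1);                        % or use I1 directly if it is already grayscale
bw = im2bw(G, graythresh(G));             % segment the bright spots
bw = bwareaopen(bw, 20);                  % drop tiny noise regions
stats = regionprops(bw, 'Centroid');
spotCenters1 = vertcat(stats.Centroid);   % N-by-2 [x y] centroids
figure; imshow(I1); hold on;
plot(spotCenters1(:,1), spotCenters1(:,2), 'r+');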
Also, are the two cameras stationary relative to each other? If so, then you may be better off calibrating the stereo pair, and doing the dense reconstruction as in this example. In that case, the two images would have to be of the same size.
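If you do calibrate the pair, a minimal sketch of that workflow would look like this, assuming stereoParams comes from a proper stereo calibration rather than from two single-camera calibrations:
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));     % dense disparity
xyzPoints = reconstructScene(disparityMap, stereoParams); % 3-D points in world units
showPointCloud(xyzPoints);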
I have a stereo camera system and I am trying this MATLAB Computer Vision Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I calibrated each camera separately based on the first example, found the intrinsic camera matrices for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox, so I commented out the corresponding code in the MATLAB example.
Here are the intrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses of a Bumblebee XB3 camera.
Question: aren't they supposed to be the same?
Then I did the stereo calibration based on the fifth example and saved the rotation matrix (R) and translation vector (T). Again, I commented out the corresponding code in the MATLAB example.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices to the MATLAB example and tried to compute the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1- My epipolar constraint values are far too large (on the order of 10^16)!
2- I am not sure about the camera matrices and how I calculated them from the R and T given by the Caltech toolbox.
P.S. As far as feature extraction goes, that part is fine.
It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
files{i}=fullfile('...\sparse_matlab', files{i});
images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
%detect SURF feature points
images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
'MetricThreshold',600);
%extract SURF descriptors
[images(i).featureVectors,images(i).points] = ...
extractFeatures(rgb2gray(images(i).image),images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
matchFeatures(images(1).featureVectors, images(2).featureVectors,...
'Prenormalized', true,'MaxRatio',0.4) ;
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a value near zero, It will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
% Compute the fundamental matrix using the example helper function
% Evaluate the epipolar constraint
epipolarConstraint =[matchedPoints1.Location(k,:),1]...
*helperCameraMatricesToFMatrix(images(1).CameraMatrix,images(2).CameraMatrix)...
*[matchedPoints2.Location(k,:),1]';
%%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
% Only consider feature matches where the absolute value of the
% constraint expression is less than the threshold.
valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
% For all image locations from a list of correspondences build an A
pointInImage1 = validpts1(i).Location;
pointInImage2 = validpts2(i).Location;
P1 = images(1).CameraMatrix'; % transpose to match the convention in [1]
P2 = images(2).CameraMatrix';
A = [
pointInImage1(1)*P1(3,:) - P1(1,:);...
pointInImage1(2)*P1(3,:) - P1(2,:);...
pointInImage2(1)*P2(3,:) - P2(1,:);...
pointInImage2(2)*P2(3,:) - P2(2,:)];
% Compute the 3-D location using the smallest singular value from the
% singular value decomposition of the matrix A
[~,~,V]=svd(A);
X = V(:,end);
X = X/X(end);
% Store location
points3D(i,:) = X';
% Store pixel color for visualization
y = round(pointInImage1(1));
x = round(pointInImage1(2));
color(i,:) = squeeze(doubleimage(x,y,:))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, and also support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox use somewhat different conventions.
The example uses the pre-multiply convention, i.e. row vector * matrix, while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you do use the camera parameters from Caltech, you would have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
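For illustration only, here is a hedged sketch of what that convention change could look like with your variables, assuming K1 and K2 are the Caltech KK matrices you loaded and R, T come from Calib_Results_stereo.mat (all in the post-multiply convention):
K1t = K1';                                     % transposed intrinsics, camera 1
K2t = K2';                                     % transposed intrinsics, camera 2
Rot = R';                                      % transposed rotation
Trans = T(:)';                                 % translation as a 1-by-3 row vector
% pre-multiply convention: [X Y Z 1] (row vector) * cameraMatrix = [x y w]
images(1).CameraMatrix = [eye(3); 0 0 0] * K1t;   % camera 1 as the reference
images(2).CameraMatrix = [Rot; Trans] * K2t;      % camera 2 relative to camera 1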
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
I need to know how to align an image in MATLAB for further processing.
For example, I have the following license plate image and I want to recognize all the digits.
My program works for straight images, so I need to align the image first and then perform the optical character recognition.
The method should be as universal as possible, so that it works for all kinds of plates and at all kinds of angles.
EDIT: I tried to do this with the Hough transform but I didn't succeed. Can anybody help me with this?
Any help will be greatly appreciated.
The solution was first hinted at by @AruniRC in the comments, then implemented by @belisarius in Mathematica. The following is my interpretation in MATLAB.
The idea is basically the same: detect edges using the Canny method, find prominent lines using the Hough transform, compute the line angles, and finally perform a shearing transform to align the image.
%# read and crop image
I = imread('http://i.stack.imgur.com/CJHaA.png');
I = I(:,1:end-3,:); %# remove small white band on the side
%# edge detection
BW = edge(rgb2gray(I), 'canny');
%# hough transform
[H T R] = hough(BW);
P = houghpeaks(H, 4, 'threshold',ceil(0.75*max(H(:))));
lines = houghlines(BW, T, R, P);
%# shearing transform
slopes = vertcat(lines.point2) - vertcat(lines.point1);
slopes = slopes(:,2) ./ slopes(:,1);
TFORM = maketform('affine', [1 -slopes(1) 0 ; 0 1 0 ; 0 0 1]);
II = imtransform(I, TFORM);
Now let's see the results:
%# show edges
figure, imshow(BW)
%# show accumulation matrix and peaks
figure, imshow(imadjust(mat2gray(H)), [], 'XData',T, 'YData',R, 'InitialMagnification','fit')
xlabel('\theta (degrees)'), ylabel('\rho'), colormap(hot), colorbar
hold on, plot(T(P(:,2)), R(P(:,1)), 'gs', 'LineWidth',2), hold off
axis on, axis normal
%# show image with lines overlayed, and the aligned/rotated image
figure
subplot(121), imshow(I), hold on
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1), xy(:,2), 'g.-', 'LineWidth',2);
end, hold off
subplot(122), imshow(II)
In Mathematica, using Edge Detection and Hough Transform:
If you are using some kind of machine learning toolbox for text recognition, try to learn from ALL plates, not only aligned ones. Recognition results should be equally good whether you transform the plate or not, since the transformation adds no new information about the true number to the image.
If all the images have a dark background like that one, you could binarize the image, fit a line to the top or bottom of the bright area, and compute an affine transformation from the line's gradient, as in the rough sketch below.
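This is a rough sketch only, assuming a single bright plate region on a dark background; the file name and the cleanup threshold are placeholders:
I = imread('plate.png');                         % hypothetical input
G = rgb2gray(I);
bw = im2bw(G, graythresh(G));
bw = bwareaopen(bw, 200);                        % keep only sizable bright regions
% topmost bright row in every column that contains part of the plate
[rows, cols] = find(bw);
topY = accumarray(cols, rows, [size(bw,2) 1], @min, NaN);
valid = find(~isnan(topY));
p = polyfit(valid, topY(valid), 1);              % slope of the plate's top edge
% shear the image so that edge becomes horizontal (same form as the TFORM above)
T = maketform('affine', [1 -p(1) 0; 0 1 0; 0 0 1]);
aligned = imtransform(I, T);
imshow(aligned)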