When calibrating a fisheye camera using MATLAB, I get the fisheye intrinsic parameters:
% Calibrate the camera using fisheye parameters
[cameraParams, imagesUsed, estimationErrors] = estimateFisheyeParameters(imagePoints, worldPoints, ...
[mrows, ncols], ...
'EstimateAlignment', true, ...
'WorldUnits', 'millimeters');
This function outputs the Scaramuzza model parameters, whose form is very different from the classical 3x3 intrinsics matrix.
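For reference, these parameters can be inspected directly on the fisheyeIntrinsics object, which exposes the Scaramuzza polynomial instead of a 3x3 matrix (property names taken from the fisheyeIntrinsics documentation):
intrinsics = cameraParams.Intrinsics;
intrinsics.MappingCoefficients % [a0 a2 a3 a4] of the polynomial a0 + a2*rho^2 + a3*rho^3 + a4*rho^4
intrinsics.DistortionCenter    % center of distortion [cx cy] in pixels
intrinsics.StretchMatrix       % 2x2 sensor alignment matrix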
From these intrinsic parameters, I would like to estimate the pose of one of my calibration patterns. So far I have found two solutions, but I do not know which one is more accurate.
First, I found I can feed the extrinsics function directly with the fisheye intrinsics:
% Extract intrinsics parameters
intrinsics = cameraParams.Intrinsics;
% Compute Rt matrix
[R,t] = extrinsics(imagePoints,worldPoints,intrinsics);
Looking inside the function, I can see this method uses a homography, but it seems to process Scaramuzza's intrinsic parameters by magic. It is the same function used for both fisheye and non-fisheye models. Any idea how the Scaramuzza parameters are processed here?
The second solution is to use the function estimateWorldCameraPose, which uses P3P and RANSAC underneath. This function does not accept the raw fisheye parameters. One solution I found (https://fr.mathworks.com/matlabcentral/answers/548787-function-estimateworldcamerapose-or-extrinsics-for-fisheyeparameters-is-missing-is-it-possible?s_tid=answers_rc1-2_p2_MLT) uses the function undistortFisheyeImage as a workaround to extract a 3x3 intrinsics matrix:
[J,camIntrinsics] = undistortFisheyeImage(I,intrinsics)
Then I can feed the new intrinsics into estimateWorldCameraPose.
Is this solution better? How reliable is this new intrinsics matrix?
In all calibration methods (whether for a normal lens or a fisheye lens), the extrinsics estimation uses the undistorted image points. So the MATLAB function extrinsics must include the undistortion step (using Scaramuzza's lens distortion model) as well.
Looking at the code of extrinsics, I see it handles the intrinsics separately when the model is a fisheye model:
if ~isa(cameraParams, 'fisheyeIntrinsics')
    if isa(cameraParams, 'cameraParameters')
        intrinsicParams = cameraParams;
    else
        intrinsicParams = cameraParams.CameraParameters;
    end
    intrinsics = intrinsicParams.IntrinsicMatrix;
    [rotationMatrix, translationVector] = vision.internal.calibration.extrinsicsPlanar(...
        imagePoints, worldPoints, intrinsics);
else
    % fisheye model: the image points are undistorted first (see below)
That is the only solution and the only way to do it. The extrinsics function accepts both fisheye and non-fisheye models: with a fisheye model the imagePoints argument contains distorted points, while with a pinhole (non-fisheye) model it must receive undistorted image points. In both cases the internal implementation is the vision.internal.calibration.extrinsicsPlanar function.
function [R,T] = extrinsicsPlanar(imagePoints, worldPoints, intrinsics)
A = intrinsics';
% Compute homography.
H = fitgeotrans(worldPoints, imagePoints, 'projective');
H = H.T';
h1 = H(:, 1);
h2 = H(:, 2);
h3 = H(:, 3);
lambda = 1 / norm(A \ h1);
% Compute rotation
r1 = A \ (lambda * h1);
r2 = A \ (lambda * h2);
r3 = cross(r1, r2);
R = [r1'; r2'; r3'];
% R may not be a true rotation matrix because of noise in the data.
% Find the best rotation matrix to approximate R using SVD.
[U, ~, V] = svd(R);
R = U * V';
% Compute translation vector.
T = (A \ (lambda * h3))';
EXTRINSICS computes the camera extrinsics from a planar calibration pattern, so this function suits your case.
For the fisheye model, the pixel points are first undistorted using a hidden method of the fisheyeIntrinsics class, imageToNormalizedVector, before extrinsicsPlanar is called:
%------------------------------------------------------------------
% Determine the normalized 3D vector on the unit sphere
%------------------------------------------------------------------
function worldPoints = imageToNormalizedVector(this, imagePoints)
%imageToNormalizedVector Determine the normalized 3D vector on
%the unit sphere.
% worldPoints = imageToNormalizedWorld(intrinsics,imagePoints)
% maps image points to the normalized 3D vector emanating from
% the single effective viewpoint on the unit sphere.
%
% Inputs:
% -------
% intrinsics - fisheyeIntrinsics object.
%
% imagePoints - M-by-2 matrix containing [x, y]
% coordinates of image points. M is the
% number of points.
%
% Output:
% -------
% worldPoints - M-by-3 matrix containing corresponding
% [X,Y,Z] coordinates on the unit sphere.
points = vision.internal.inputValidation.checkAndConvertPoints(...
imagePoints, 'fisheyeIntrinsics', 'imagePoints');
if isa(points, 'single')
points = double(points);
end
center = double(this.DistortionCenter);
stretch = double(this.StretchMatrix);
coeffs = double(this.MappingCoeffsInternal);
% Convert image points to sensor coordinates
points(:, 1) = points(:, 1) - center(1);
points(:, 2) = points(:, 2) - center(2);
points = stretch \ points';
rho = sqrt(points(1, :).^2 + points(2, :).^2);
f = polyval(coeffs(end:-1:1), rho);
% Note, points could be invalid if f < 0
worldPoints = [points; f]';
nw = sqrt(sum(worldPoints.^2, 2));
nw(nw == 0) = eps;
worldPoints = worldPoints ./ horzcat(nw,nw, nw);
worldPoints = cast(worldPoints, class(imagePoints));
end
The worldPoints input of the estimateWorldCameraPose function expects 3-column positions, but you are using coplanar coordinates with z = 0, so the function may return a status of 1 or 2. Your second solution may therefore not be better.
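To make this concrete, here is a minimal sketch of the second workflow with the status output requested, plus a reprojection-error check to compare the two poses. The variable names follow the question (imagePoints is assumed to hold the corners of a single view), and the calls are my assumption based on the documented signatures of undistortFisheyePoints, estimateWorldCameraPose, cameraPoseToExtrinsics and worldToImage:
% Undistort the detected points once; also get a virtual pinhole camera
[undistortedPoints, camIntrinsics] = undistortFisheyePoints(imagePoints, cameraParams.Intrinsics, 1);
% P3P + RANSAC; requesting the status output avoids an error on failure
worldPoints3 = [worldPoints, zeros(size(worldPoints,1), 1)]; % planar board, z = 0
[worldOrientation, worldLocation, inlierIdx, status] = ...
    estimateWorldCameraPose(undistortedPoints, worldPoints3, camIntrinsics);
if status ~= 0
    warning('estimateWorldCameraPose returned status %d (0 means success).', status);
end
% Compare both poses by their mean reprojection error on the board corners
[R1, t1] = extrinsics(imagePoints, worldPoints, cameraParams.Intrinsics);
err1 = mean(vecnorm(worldToImage(cameraParams.Intrinsics, R1, t1, worldPoints3) - imagePoints, 2, 2));
[R2, t2] = cameraPoseToExtrinsics(worldOrientation, worldLocation);
err2 = mean(vecnorm(worldToImage(camIntrinsics, R2, t2, worldPoints3) - undistortedPoints, 2, 2));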
Hello, I'm having difficulty using the split-step Fourier method. Assuming I want to propagate a Gaussian in free space, I'm supposed to use:
A(x,z) = F^-1[ exp(i*k_x^2*z/(2*k_0)) * F[A(x,0)] ]
where F is the Fourier transform and F^-1 its inverse, taken with respect to x with conjugate variable k_x.
What I want to do is plot I(x,z=0), I(x,z=3) and the intensity distribution in the x-z plane.
I tried doing it numerically (plotting I(x,z=0), I(x,z=3)) using the following code:
lambda = 0.5*10^-6;
k0 = 2*pi/lambda;
w = 10*10^-6;
N=500;
a=0.4*10^-4;
dx=a/N;
x = -a/2:dx:a/2-dx;
Dk_x = 2*pi*N/a;
dk_x=2*pi/a;
k_x=-Dk_x/2:dk_x:Dk_x/2-dk_x;
N = (k_x.^2)/(2*k0);
z = 0:(5*10^-3)/length(N):5*10^-3;
z(end) = [];
% A0 = A(x,z=0)
A0 = exp(-x.^2/w^2);
I_0 = A0.*conj(A0);
% F_A0 is the fourier of A0
F_A0 = fft(A0);
% A3 = A(x,z=3)
A3 = ifft(exp(1i*N*3).*F_A0);
I_3 = A3.*conj(A3);
figure
plot(x,I_3,x,I_0)
However, I_3 is not what I expected to get, which would be another Gaussian of lower peak intensity.
Also, I'm unsure how to plot the intensity distribution in the x-z plane. They suggest using the imagesc function, which I suppose I'd use like this:
imagesc(x, z, abs(ifft(exp(1i*N.*z).*F_A0).^2))
but the third argument, which is supposed to be a matrix, is a vector in what I've written.
Can anyone please help me with this?
Thank you in advance.
Here's an example of split-step propagation of a free Gaussian:
N=2^9; % x grid points
L=100; % box length
dx=L/N; %position grid interval
x=(-L/2+1/N):dx:L/2; %define position grid (centered around origin)
dk=2*pi/L; %momentum grid interval
k=(-N/2+1:1:N/2).*dk; %define momentum grid
A_z=exp(-x.^2); % initial gaussian
dz=0.01; % propagation step
z=0:dz:3; % propagation vector
% do the propagation using split step
I_z = zeros(numel(z), N); % preallocate the intensity record
for n = 1:numel(z)
    % for even N, fftshift also undoes the shift after the k-space multiply
    A_z = ifft(fftshift( exp(1i*k.^2*dz).*fftshift(fft(A_z)) ));
    I_z(n,:) = abs(A_z).^2;
end
imagesc(x,z,I_z)
xlim([-20 20]);
xlabel('x'); ylabel('z')
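If you want the same loop in the question's physical units, keep the shift-multiply-unshift pattern and plug in the phase exp(1i*k_x.^2*z/(2*k0)) exactly as written in the question. A sketch under that assumption, with names adapted to avoid the asker's N clash:
lambda = 0.5e-6; k0 = 2*pi/lambda; w = 10e-6; % wavelength, wavenumber, beam waist
Np = 500; a = 0.4e-4; dx = a/Np;
x = -a/2:dx:a/2-dx;
k_x = (-Np/2:Np/2-1)*(2*pi/a);                % centered spatial-frequency grid
A0 = exp(-x.^2/w^2);                          % initial Gaussian
zv = linspace(0, 5e-3, 200);
I_xz = zeros(numel(zv), Np);                  % intensity map, one row per z
F_A0 = fftshift(fft(A0));                     % spectrum reordered to match k_x
for n = 1:numel(zv)
    Az = ifft(ifftshift(exp(1i*k_x.^2*zv(n)/(2*k0)).*F_A0));
    I_xz(n,:) = abs(Az).^2;
end
imagesc(x, zv, I_xz); xlabel('x'); ylabel('z')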
I have a set of 3D points (x,y,z), and I would like to fit a straight line to those data using the least absolute deviation method.
I found a function on the internet which works pretty well with 2D data. How could I modify it to handle 3D data points?
function B = L1LinearRegression(X,Y)
% Determine size of predictor data
[n, m] = size(X);
% Initialize with least-squares fit
B = [ones(n,1) X] \ Y;
BOld = B;
BOld(1) = BOld(1) + 1e-5; % force divergence so the loop runs at least once
% Repeat until convergence
while (max(abs(B - BOld)) > 1e-6)
    % Move old coefficients
    BOld = B;
    % Calculate new observation weights (based on residuals from old coefficients)
    W = sqrt(1 ./ max(abs((BOld(1) + (X * BOld(2:end))) - Y), 1e-6)); % floor avoids division by zero
    % Calculate new coefficients
    B = (repmat(W, [1 m+1]) .* [ones(n,1) X]) \ (W .* Y);
end
Thank you very much!
I know that this is not an answer to the question itself, but rather to a different problem that led to it.
We can use the fit function several times, once per coordinate.
% XYZ=[x(:),y(:),z(:)]; % suppose we have data in this format
M = size(XYZ,1);        % number of data points
t = ((0:M-1)/(M-1))';   % create an arbitrary parameter t
% fit each coordinate as a function x_i = a_i*t + b_i
fitX = fit(t, XYZ(:,1), 'poly1');
fitY = fit(t, XYZ(:,2), 'poly1');
fitZ = fit(t, XYZ(:,3), 'poly1');
temp = [0; 1];          % parameter interval over which to plot the line
% Evaluate and plot the line coordinates
Line = [fitX(temp), fitY(temp), fitZ(temp)];
plot3(Line(:,1), Line(:,2), Line(:,3))
The advantage is that this works for any cloud, even one parallel to an axis. Another advantage is that you are not limited to first-order polynomials: you can choose a different function for each axis and fit any 3D curve. A short usage sketch follows.
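Here is a quick usage sketch with synthetic, hypothetical data (fit requires the Curve Fitting Toolbox). Note that the index-based parameter t only makes sense if the points are ordered along the curve, hence the sort:
M = 200;
s = sort(rand(M,1));                         % ordered positions along the line
XYZ = [1+2*s, 3-s, 0.5*s] + 0.02*randn(M,3); % noisy samples of a 3D line
t = ((0:M-1)/(M-1))';
fitX = fit(t, XYZ(:,1), 'poly1');
fitY = fit(t, XYZ(:,2), 'poly1');
fitZ = fit(t, XYZ(:,3), 'poly1');
temp = [0; 1];
Line = [fitX(temp), fitY(temp), fitZ(temp)];
plot3(XYZ(:,1), XYZ(:,2), XYZ(:,3), '.', Line(:,1), Line(:,2), Line(:,3), 'r-'); grid on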
I recently found an interesting article on image rectification for stereo image pairs. I liked the algorithm because it was very compact, and from what the article suggested it did the right thing. After implementing the MATLAB version and running it on two images, I did not get a correctly rectified image: the result was pitch black apart from the left and bottom edges, which had pixels, plus a handful of gray pixels from the original image. Below I post the MATLAB code, the link to the article, and an example of the result I got for one image (it was the same for the other image).
This is the link to the article: A compact algorithm for rectification of stereo pairs.
A screenshot with the initial images and the results is below.
The initial images are the following two (so that you do not have to search for another stereo pair):
function [T1,T2,Pn1,Pn2] = rectify(Po1,Po2)
% RECTIFY: compute rectification matrices
% factorize old PPMs
[A1,R1,t1] = art(Po1);
[A2,R2,t2] = art(Po2);
% optical centers (unchanged)
c1 = - inv(Po1(:,1:3))*Po1(:,4);
c2 = - inv(Po2(:,1:3))*Po2(:,4);
% new x axis (= direction of the baseline)
v1 = (c1-c2);
% new y axes (orthogonal to new x and old z)
v2 = cross(R1(3,:)',v1);
% new z axes (orthogonal to baseline and y)
v3 = cross(v1,v2);
% new extrinsic parameters
R = [v1'/norm(v1)
v2'/norm(v2)
v3'/norm(v3)];
% translation is left unchanged
% new intrinsic parameters (arbitrary)
A = (A1 + A2)./2;
A(1,2)=0; % no skew
A(1,3) = A(1,3) + 160;
% new projection matrices
Pn1 = A * [R -R*c1 ];
Pn2 = A * [R -R*c2 ];
% rectifying image transformation
T1 = Pn1(1:3,1:3)* inv(Po1(1:3,1:3));
T2 = Pn2(1:3,1:3)* inv(Po2(1:3,1:3));
function [A,R,t] = art(P)
% ART: factorize a PPM as P = A*[R t]
Q = inv(P(1:3, 1:3));
[U,B] = qr(Q);
R = inv(U);
t = B*P(1:3,4);
A = inv(B);
A = A ./A(3,3);
This is the "main" code from which I call my rectify function
img1 = imread('D:\imag1.png');
img2 = imread('D:\imag2.png');
im1 = rgb2gray(img1);
im2 = rgb2gray(img2);
im1 = im2double(im1);
im2 = im2double(im2);
figure; imshow(im1, 'border', 'tight')
figure; imshow(im2, 'border', 'tight')
%pair projection matrices obtained after the calibration P01,P02
a = double(9.765*(10^2))
b = double(5.790*(10^-1))
format bank;
Po1 = double([a 5.382*10 -2.398*(10^2) 3.875*(10^5);
9.849*10 9.333*(10^2) 1.574*(10^2) 2.428*(10^5);
b 1.108*(10^(-1)) 8.077*(10^(-1)) 1.118*(10^3)]);
Po2 = [9.767*(10^2) 5.376*10 -2.400*(10^2) 4.003*(10^4);
9.868*10 9.310*(10^2) 1.567*(10^2) 2.517*(10^5);
5.766*(10^(-1)) 1.141*(10^(-1)) 8.089*(10^(-1)) 1.174*(10^3)];
[T1, T2, Pn1, Pn2] = rectify(Po1, Po2);
imnoua = conv2(im1, T1);
imnoua2 = conv2(im2, T2);
fprintf('The new image is\n');
figure; imshow(imnoua, 'border', 'tight')
figure; imshow(imnoua2, 'border', 'tight')
Thank you for your time!
As Shai says, T1 and T2 are projective transformation matrices, not filter kernels. You should be using imwarp, rather than conv2:
imnoua = imwarp(im1, projective2d(T1'));   % note the transpose: projective2d uses the row-vector convention [x y 1]*T
imnoua2 = imwarp(im2, projective2d(T2'));
Better yet, use rectifyStereoImages from the Computer Vision System Toolbox. Check out this example.
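For completeness, a minimal sketch of that route, assuming stereoParams has already been obtained with the Stereo Camera Calibrator app or estimateCameraParameters:
[J1, J2] = rectifyStereoImages(im1, im2, stereoParams);
imshow(stereoAnaglyph(J1, J2), 'border', 'tight'); % corresponding rows should now align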
After constructing the point cloud, I want to get the normal of each point. I used the built-in MATLAB function surfnorm, but it takes a lot of processing time. Could anyone suggest a better, more efficient way to do this?
I wonder if the following code would help you. There are three steps here:
1. Create 500 randomly spaced points (x,y), and compute a corresponding value z (the height of the surface), for which I chose a sinc-like function.
2. Resample the random points using the TriScatteredInterp function; this permits me to obtain points on an evenly sampled grid that roughly correspond to the initial surface.
3. Compute the normal at some of the points on that grid. Since there are 480x640 points, computing the normal at every point would create an impossibly dense "forest of vectors"; by sampling only a subset (every 30th point in the code below) you can actually see what you are doing.
The code I used was as follows:
randomX = rand(1,500);
randomY = rand(1,500);
r = 5*sqrt(randomX.^2 + randomY.^2);
randomZ = sin(r) ./ r;
% resample the data:
[xx, yy] = meshgrid(linspace(0,1,640), linspace(0,1,480));
F = TriScatteredInterp(randomX(:), randomY(:), randomZ(:)); % use scatteredInterpolant in newer MATLAB releases
zz = F(xx, yy);
%% at each point, the normal is cross product of vectors to neighbors
xyz=reshape([xx yy zz],[size(xx) 3]);
xv = 10:30:479; yv = 10:30:639; % points at which to compute normals
dx = xyz(xv, yv+1, :) - xyz(xv, yv, :);
dy = xyz(xv+1, yv, :) - xyz(xv, yv, :);
normVecs = cross(dx, dy); % here we compute the normals.
normVecs = normVecs ./ repmat(sqrt(sum(normVecs.^2, 3)), [1 1 3]);
figure;
quiver3(xx(xv, yv), yy(xv, yv), zz(xv, yv), ...
normVecs(:,:,1), normVecs(:,:,2), normVecs(:,:,3));
axis equal
view([56 22]);
And the resulting plot:
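As a side note, if you have the Computer Vision System Toolbox, pcnormals estimates a normal for every point of a pointCloud object directly; a minimal sketch (xyzPoints is a hypothetical M-by-3 array holding your cloud):
ptCloud = pointCloud(xyzPoints);  % build a point cloud object
normals = pcnormals(ptCloud, 12); % one normal per point, estimated from 12 nearest neighbors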
I am trying to convert an image from cartesian to polar coordinates.
I know how to do it explicitly using for loops, but I am looking for something more compact.
I want to do something like:
[x y] = size(CartImage);
minr = floor(min(x,y)/2);
r = linspace(0,minr,minr);
phi = linspace(0,2*pi,minr);
[r, phi] = ndgrid(r,phi);
PolarImage = CartImage(floor(r.*cos(phi)) + minr, floor(r.*sin(phi)) + minr);
But this obviously doesn't work.
Basically I want to be able to index the CartImage on a grid.
The polar image would then be defined on the grid.
Given a matrix M (just a 2D Gaussian for this example) and a known origin point (X0,Y0) from which the polar transform takes place, we expect that iso-intensity circles will transform to iso-intensity lines:
M=fspecial('gaussian',256,32); % generate fake image
X0=size(M,1)/2; Y0=size(M,2)/2;
[Y X z]=find(M);
X=X-X0; Y=Y-Y0;
theta = atan2(Y,X);
rho = sqrt(X.^2+Y.^2);
% Determine the minimum and the maximum x and y values:
rmin = min(rho); tmin = min(theta);
rmax = max(rho); tmax = max(theta);
% Define the resolution of the grid:
rres=128; % # of grid points for R coordinate. (change to needed binning)
tres=128; % # of grid points for theta coordinate (change to needed binning)
F = TriScatteredInterp(rho,theta,z,'natural');
%Evaluate the interpolant at the locations (rhoi, thetai).
%The corresponding value at these locations is Zinterp:
[rhoi,thetai] = meshgrid(linspace(rmin,rmax,rres),linspace(tmin,tmax,tres));
Zinterp = F(rhoi,thetai);
subplot(1,2,1); imagesc(M) ; axis square
subplot(1,2,2); imagesc(Zinterp) ; axis square
Getting the wrong (X0,Y0) will show up as deformations in the transform, so be careful and check that; one quick check is sketched below.
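For the synthetic Gaussian above, a quick sanity check (my addition) is to compare the intensity peak with the (X0,Y0) you used:
[~, idx] = max(M(:));
[Y0_peak, X0_peak] = ind2sub(size(M), idx); % row/column of the brightest pixel; should be close to (Y0, X0)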
I notice that the answer from bla goes from polar to Cartesian coordinates, whereas the question asks for the opposite direction. Here is code for the conversion in the requested direction:
I = imread('output.png');                   % read image
I1 = flipud(I);
A = imresize(I1, [1024 1024]);
A1 = double(A(:,:,1));
A2 = double(A(:,:,2));
A3 = double(A(:,:,3));                      % three RGB channels to double
[m, n] = size(A1);
[t, r] = meshgrid(linspace(-pi,pi,n), 1:m); % original (angle, radius) grid
M = 2*m;
N = 2*n;
[NN, MM] = meshgrid((1:N)-n-0.5, (1:M)-m-0.5);
T = atan2(NN, MM);
R = sqrt(MM.^2 + NN.^2);
B1 = interp2(t, r, A1, T, R, 'linear', 0);
B2 = interp2(t, r, A2, T, R, 'linear', 0);
B3 = interp2(t, r, A3, T, R, 'linear', 0);  % interpolate each channel
B = uint8(cat(3, B1, B2, B3));
subplot(211), imshow(I);                    % draw the original picture
subplot(212), imshow(B);                    % draw the result