How to find radial distortion coefficients between two sets of points? - matlab

I have two checkerboard images. One of them is slightly more distorted than the other. I think it's a type of "barrel" distortion. I'm trying to compute the radial distortion parameters (or generate camera parameters) in order to distort one of the images to look like the other so that the checkerboard corners will line up.
Here is the binary undistorted image with its corners plotted as blue o's, and the reference corner coordinates that the image needs to be distorted to match plotted as red o's.
Most of the distortion is happening around the edges and the corners. I believe this is a type of radial distortion. How do I find the radial distortion coefficients between the two sets of coordinates representing the checkerboard corners?
Link to Image A (undistorted): http://imgur.com/rg4PNvp
Link to Image B (distorted): http://imgur.com/a/BIvid
I need to transform the checkerboard from Image B to have its corners line up with the corners from image A.
I have tried modifying the script generated by the MATLAB Camera Calibration app (link). I changed the world points used to estimate the camera parameters to my corner coordinates from Image A, but this wasn't successful. The code I tried can be seen in this pastebin: https://pastebin.com/D0StCb0p I used the same image twice in imageFileNames because estimateCameraParameters requires at least two sets of coordinates.
Code:
% Define images to process
imageFileNames = {'C:\Users\asavelyev\Pictures\checkerB.tif',...
'C:\Users\asavelyev\Pictures\checkerB.tif',...
};
% Detect checkerboards in images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
imageFileNames = imageFileNames(imagesUsed);
% Read the first image to obtain image size
originalImage = imread(imageFileNames{1});
[mrows, ncols, ~] = size(originalImage);
% AS: change these worldPoints to points of RGB image...
% Generate world coordinates of the corners of the squares
worldPoints = detectCheckerboardPoints('C:\Users\asavelyev\Pictures\checkerA.tif');
% Calibrate the camera
[cameraParams, imagesUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
'EstimateSkew', false, 'EstimateTangentialDistortion', false, ...
'NumRadialDistortionCoefficients', 2, 'WorldUnits', 'millimeters', ...
'InitialIntrinsicMatrix', [], 'InitialRadialDistortion', [], ...
'ImageSize', [mrows, ncols]);
% For example, you can use the calibration data to remove effects of lens distortion.
undistortedImage = undistortImage(originalImage, cameraParams);

This was solved using this post from Mathematics Stack Exchange: https://math.stackexchange.com/questions/302093/how-to-calculate-the-lens-distortion-coefficients-with-a-known-displacement-vect
The main issue I had overlooked was the normalization of the data points used in computing the K coefficients of the radial distortion.
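For reference, the model from that post relates each reference (undistorted) corner to its distorted counterpart through a radial polynomial, with all coordinates measured from the image centre:

x' = x*(1 + k1*r^2 + k2*r^4)
y' = y*(1 + k1*r^2 + k2*r^4), where r^2 = x^2 + y^2

Each corner pair therefore contributes two equations that are linear in the unknowns k1 and k2, so the coefficients can be recovered with a least-squares solve.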
Here is the MATLAB script I wrote for finding these coefficients:
% Input images should be black-and-white checkerboards already thresholded into binary images.
% Output is the vector of K coefficients used in the radial distortion model.
function K = CalculateRadialDistortion(DistortedImg, UndistortedImg)
    distortedCorners = detectCheckerboardPoints(DistortedImg);
    undistortedCorners = detectCheckerboardPoints(UndistortedImg);
    % "Normalize" the data: shift the origin to the image centre,
    % which is taken as the centre of distortion
    X1 = distortedCorners(:,1) - size(DistortedImg, 2)/2;
    Y1 = distortedCorners(:,2) - size(DistortedImg, 1)/2;
    X1p = undistortedCorners(:,1) - size(DistortedImg, 2)/2;
    Y1p = undistortedCorners(:,2) - size(DistortedImg, 1)/2;
    % Model: X1p = (1 + k1*r^2 + k2*r^4)*X1, where r^2 = X1^2 + Y1^2
    Rsq = X1.^2 + Y1.^2;
    Rquad = Rsq.^2;
    % Stack the x and y equations into one overdetermined linear system
    Rsqd = cat(1, Rsq, Rsq);
    Rquadd = cat(1, Rquad, Rquad);
    R = cat(2, Rsqd, Rquadd);
    X1poX1 = X1p ./ X1 - 1;   % (X1p ./ X1) - 1 = k1*r^2 + k2*r^4
    Y1poY1 = Y1p ./ Y1 - 1;
    X1Y1 = cat(1, X1poX1, Y1poY1);
    K = linsolve(R, X1Y1);    % least-squares solution for [k1; k2]
end
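A minimal usage sketch (my addition, not part of the original solution): fitting the model in the A-to-B direction yields exactly the inverse mapping that imwarp expects, so Image B can be resampled onto Image A's grid. The file names are placeholders, and geometricTransform2d requires R2018b or later:

B = imread('checkerB.tif');                  % distorted image
A = imread('checkerA.tif');                  % reference image
K = CalculateRadialDistortion(A, B);         % maps A coordinates to B coordinates
cx = size(B, 2)/2;  cy = size(B, 1)/2;       % same centring as in the fit
tform = geometricTransform2d(@(xy) radialMap(xy, K, cx, cy));
Bwarped = imwarp(B, tform, 'OutputView', imref2d([size(A, 1) size(A, 2)]));

function out = radialMap(xy, K, cx, cy)
    % Map output (A-frame) points to input (B-frame) points
    xc = xy(:, 1) - cx;  yc = xy(:, 2) - cy;
    r2 = xc.^2 + yc.^2;
    s = 1 + K(1)*r2 + K(2)*r2.^2;
    out = [s.*xc + cx, s.*yc + cy];
end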

Related

How to connect the coordinates and fill the area to create a binary mask?

I need to create a binary mask from a series of coordinates. My current code is shown below, but the edges of the resulting image are not smooth. I think it is not precise, and I need to make sure I am connecting the exact coordinates together.
Here on the left side I plotted the points (always 42 points), and on the right side is the output of the code. As you can see, the edges are not smooth.
Here is the current code and the output (the coordinates are attached):
im is a 112 x 112 image that is zero everywhere except at the X, Y coordinates and inside the region they enclose, which are filled with 255.
function BW = mask_data(X, Y, im)
    % Round the coordinates and clamp them to the image bounds
    X = round(X);
    Y = round(Y);
    X(X < 1) = 1;
    Y(Y < 1) = 1;
    BW = im > 255;            % all-false logical mask, same size as im (for uint8 input)
    for p = 1:length(X)
        BW(Y(p), X(p)) = 1;   % mark each coordinate
    end
    BW = BW * 255;            % redundant: bwconvhull treats any nonzero pixel as true
    BW = bwconvhull(BW);      % fill the convex hull of the marked points
    BW = im2uint8(BW);
    figure;
    imshow(BW);
    close all;
end
I believe the union convex hull is still your best bet. If your images really consist of a single object, then the algorithm you have shown should work just fine, though you are doing some redundant steps in your code. If that is not the case, you may want to compute the convex hull of each component separately by adding the 'objects' option to your bwconvhull call. If you strongly believe that the results are "not precise", then you may want to show an example image in which the algorithm actually fails.
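For instance, a one-line sketch of that variant (assuming BW is a binary image containing several disjoint objects):

CH = bwconvhull(BW, 'objects'); % one convex hull per connected component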
As for the results not being smooth: you should not expect smooth boundaries from a 112 x 112 image with an object boundary like the one you have shown. However, I would simply smooth the results if smoother images are preferred:
originalImage = imread('Address\to\your\image.png');
% To have the same image size as yours
originalImage = imresize(originalImage, [112 112]);
% Create a binary image
binaryImage = im2bw(originalImage);
% Create a binary convex hull image
UnionCH = bwconvhull(binaryImage);
% Smooth the results (note the change of binary class)
% Second arg (0.7) is the std dev of the Gaussian smoothing kernel
SmoothUnionCH = imgaussfilt(single(UnionCH), 0.7);
figure
subplot(131)
imshow(binaryImage)
title('Binary Image')
subplot(132)
imshow(UnionCH)
title('Binary Convex Hull Image')
subplot(133)
imshow(SmoothUnionCH,[])
title('Smooth Convex Hull Image')
You can adjust the size of the smoothing kernel of course. The results for the code above:

Count circle objects in an image using matlab

How to count circle objects in a bright image using MATLAB?
The input image is:
The imfindcircles function can't find any circles in this image.
Based on well-known image processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg'); % read the image
imgGray = rgb2gray(img); % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray) % show the image
[gx, gy] = gradient(double(imgGray)); % take the first derivative
[gxx, gxy] = gradient(gx); % take the second derivatives
[gxy, gyy] = gradient(gy); % take the second derivatives
k = 0.04; % typically 0.04-0.15 (see Wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris-style detector on the second derivatives (high curvature in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0); % keep only bright blobs (negative second derivatives in both directions)
figure
imshow(blob) % show the blobs
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the position of the circles
plot(Y, X, 'w.'); % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not too bad) overestimate.
The following figure shows the original image with the circle positions indicated as white dots. Some false detections occur at the border of the object. You can try adjusting the constants sigma, k and blobThreshold to obtain better results; in particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.
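As a quick sensitivity check (a sketch of my own, assuming the blob image from the code above is still in the workspace), you can re-count for a few thresholds:

for t = [0.5 1 2 4]
    n = nnz(imregionalmax(blob) & blob > t);
    fprintf('threshold %.1f -> %d circles\n', t, n);
end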

Sparse 3D reconstruction MATLAB example

I have a stereo camera system and I am trying this MATLAB Computer Vision Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I tried each camera separately, following the first example, found the intrinsic camera calibration matrices for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox, so I commented out the corresponding code from the MATLAB example.
Here are the intrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses of a Bumblebee XB3 camera.
Question: aren't they supposed to be the same?
Then I did the stereo calibration following the fifth example and saved the resulting rotation matrix (R) and translation vector (T), so I commented out that code from the MATLAB example as well.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices into the MATLAB example and tried to compute the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1. My epipolar constraint values are far too large (on the order of 10^16)!
2. I am not sure about the camera matrices and how I calculated them from the R and T produced by the Caltech toolbox.
P.S. As far as feature extraction goes, that part is fine. It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
    files{i} = fullfile('...\sparse_matlab', files{i});
    images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
    % detect SURF feature points
    images(i).points = detectSURFFeatures(rgb2gray(images(i).image), ...
        'MetricThreshold', 600);
    % extract SURF descriptors
    [images(i).featureVectors, images(i).points] = ...
        extractFeatures(rgb2gray(images(i).image), images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = matchFeatures(images(1).featureVectors, images(2).featureVectors, ...
    'Prenormalized', true, 'MaxRatio', 0.4);
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a value near zero; it will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
    % Compute the fundamental matrix using the example helper function
    % and evaluate the epipolar constraint
    epipolarConstraint = [matchedPoints1.Location(k,:), 1] ...
        * helperCameraMatricesToFMatrix(images(1).CameraMatrix, images(2).CameraMatrix) ...
        * [matchedPoints2.Location(k,:), 1]';
    %%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
    % Only consider feature matches where the absolute value of the
    % constraint expression is less than the threshold.
    valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
    % For each pair of corresponding image locations, build the matrix A
    pointInImage1 = validpts1(i).Location;
    pointInImage2 = validpts2(i).Location;
    P1 = images(1).CameraMatrix'; % transpose to match the convention in [1]
    P2 = images(2).CameraMatrix';
    A = [pointInImage1(1)*P1(3,:) - P1(1,:); ...
         pointInImage1(2)*P1(3,:) - P1(2,:); ...
         pointInImage2(1)*P2(3,:) - P2(1,:); ...
         pointInImage2(2)*P2(3,:) - P2(2,:)];
    % Compute the 3-D location from the singular vector associated with
    % the smallest singular value of A
    [~, ~, V] = svd(A);
    X = V(:, end);
    X = X/X(end);
    % Store location
    points3D(i,:) = X';
    % Store pixel color for visualization
    y = round(pointInImage1(1));
    x = round(pointInImage1(2));
    color(i,:) = squeeze(doubleimage(x, y, :))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, as well as support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox follow somewhat different conventions.
The example uses the pre-multiply convention (row vector * matrix), while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you use the camera parameters from Caltech, you have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
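A minimal sketch of that convention fix (KK, R and T are the variables loaded from the Caltech .mat files in your script; treat this as an assumption, not verified code):

K1 = KK';                  % transpose the Caltech intrinsic matrix
Rotation = R';             % transpose the Caltech rotation matrix
Translation = T';          % row vector, as the example expects
camMatrix = [Rotation; Translation] * K1; % pre-multiply camera matrix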
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
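A rough sketch of that dense pipeline, with I1 and I2 being the left and right images (this assumes a stereoParams object produced by MATLAB's own stereo calibration rather than by the Caltech toolbox):

[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);     % align the epipolar lines
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));     % dense correspondence
xyzPoints = reconstructScene(disparityMap, stereoParams); % back-project to 3-D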

Apply a gaussian distribution in a specific part of an image

I have for example the following image and a corresponding mask.
I would like to weight the pixels inside the white circle with a Gaussian, g = @(x,y,xc,yc) exp(-( ((x-xc)^2)/0.5 + ((y-yc)^2)/0.5 ));, placed at the centroid (xc,yc) of the mask, where x, y are the coordinates of the corresponding pixels. Could someone please suggest a way to do that without using for loops?
Thanks.
By "weighting" pixels inside the ellipse, I assume you mean multiply elementwise by a 2D gaussian. If so, here's the code:
% Read images
img = imread('img.jpg');
img = im2double(rgb2gray(img));
mask = imread('mask.jpg');
mask = im2double(rgb2gray(mask)) > 0.9; % JPG Compression resulted in some noise
% Gaussian function
g = @(x,y,xc,yc) exp(-(((x-xc).^2)/500+((y-yc).^2)./200)); % Should be modified to allow variances as parameters
% Use rp to get centroid and mask
rp_mask = regionprops(mask,'Centroid','BoundingBox','Image');
% Form coordinates
centroid = round(rp_mask.Centroid);
[coord_x coord_y] = meshgrid(ceil(rp_mask.BoundingBox(1)):ceil(rp_mask.BoundingBox(1))+rp_mask.BoundingBox(3)-1, ...
ceil(rp_mask.BoundingBox(2)):ceil(rp_mask.BoundingBox(2))+rp_mask.BoundingBox(4)-1);
% Get Gaussian Mask
gaussian_mask = g(coord_x,coord_y,centroid(1),centroid(2));
gaussian_mask(~rp_mask.Image) = 1; % Set values outside ROI to 1, this negates weighting outside ROI
% Apply Gaussian - Can use temp variables to make this shorter
img_g = img;
img_g(ceil(rp_mask.BoundingBox(2)):ceil(rp_mask.BoundingBox(2))+rp_mask.BoundingBox(4)-1, ...
ceil(rp_mask.BoundingBox(1)):ceil(rp_mask.BoundingBox(1))+rp_mask.BoundingBox(3)-1) = ...
img(ceil(rp_mask.BoundingBox(2)):ceil(rp_mask.BoundingBox(2))+rp_mask.BoundingBox(4)-1, ...
ceil(rp_mask.BoundingBox(1)):ceil(rp_mask.BoundingBox(1))+rp_mask.BoundingBox(3)-1) .* gaussian_mask;
% Show
figure, imshow(img_g,[]);
The result:
If you instead want to perform some filtering within that ROI, there is a function called roifilt2 which lets you filter the image within that region as well:
img_filt = roifilt2(fspecial('gaussian',[21 21],10),img,mask);
figure, imshow(img_filt,[]);
The result:

How can I implement a fisheye lens effect (barrel transformation) in MATLAB?

How can one implement the fisheye lens effect illustrated in this image?
One can use Google's logo as a test:
BTW, what's the term for it?
I believe this is typically referred to as either a "fisheye lens" effect or a "barrel transformation". Here are two links to demos that I found:
Sample code for how you can apply fisheye distortions to images using the 'custom' option for the function maketform from the Image Processing Toolbox.
An image processing demo which performs a barrel transformation using the function tformarray.
Example
In this example, I started with the function radial.m from the first link above and modified the way it relates points between the input and output spaces to create a nice circular image. The new function fisheye_inverse is given below, and it should be placed in a folder on your MATLAB path so you can use it later in this example:
function U = fisheye_inverse(X, T)
    imageSize = T.tdata(1:2);
    exponent = T.tdata(3);
    origin = (imageSize+1)./2;
    scale = imageSize./2;
    x = (X(:, 1)-origin(1))/scale(1);
    y = (X(:, 2)-origin(2))/scale(2);
    R = sqrt(x.^2+y.^2);
    theta = atan2(y, x);
    cornerScale = min(abs(1./sin(theta)), abs(1./cos(theta)));
    cornerScale(R < 1) = 1;
    R = cornerScale.*R.^exponent;
    x = scale(1).*R.*cos(theta)+origin(1);
    y = scale(2).*R.*sin(theta)+origin(2);
    U = [x y];
end
The fisheye distortion looks best when applied to square images, so you will want to make your images square by either cropping them or padding them with some color. Since the transformation of the image will not look right for indexed images, you will also want to convert any indexed images to RGB images using ind2rgb. Grayscale or binary images will also work fine. Here's how to do this for your sample Google logo:
[X, map] = imread('logo1w.png'); % Read the indexed image
rgbImage = ind2rgb(X, map); % Convert to an RGB image
[r, c, d] = size(rgbImage); % Get the image dimensions
nPad = (c-r)/2; % The number of padding rows
rgbImage = cat(1, ones(nPad, c, 3), rgbImage, ones(nPad, c, 3)); % Pad with white
Now we can create the transform with maketform and apply it with imtransform (or imwarp as recommended in newer versions):
options = [c c 3]; % An array containing the columns, rows, and exponent
tf = maketform('custom', 2, 2, [], ... % Make the transformation structure
    @fisheye_inverse, options);
newImage = imtransform(rgbImage, tf); % Transform the image
imshow(newImage); % Display the image
And here's the image you should see:
You can adjust the degree of distortion by changing the third value in the options array, which is the exponential power used in the radial deformation of the image points.
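For example (a small sketch reusing the variables above): an exponent closer to 1 gives a milder distortion, while larger values exaggerate it:

options = [c c 1.5]; % milder than the exponent of 3 used above
tf = maketform('custom', 2, 2, [], @fisheye_inverse, options);
milderImage = imtransform(rgbImage, tf);
imshow(milderImage)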
I think you are referring to the fisheye lens effect. Here is some code for imitating fisheye in MATLAB.
Just for the record:
This effect is a type of radial distortion called "barrel distortion".
For more information please see:
http://en.wikipedia.org/wiki/Distortion_(optics)
Here is a different method to apply an effect similar to barrel distortion using texture mapping (adapted from MATLAB Documentation):
[I,map] = imread('logo.gif');
[h,w] = size(I);
sphere;
hS = findobj('Type','surface');
hemisphere = [ones(h,w),I,ones(h,w)];
set(hS,'CData',flipud(hemisphere),...
'FaceColor','texturemap',...
'EdgeColor','none')
colormap(map)
colordef black
axis equal
grid off
set(gca,'xtick',[],'ztick',[],'ytick',[],'box','on')
view([90 0])
This will give you the circular frame you are looking for, but the aliasing artifacts might be too much to deal with.
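If the aliasing is a problem, one possible workaround (a sketch of my own, not from the original answer) is to rasterize the figure at a higher resolution and downsample, which acts as supersampling; print with the '-RGBImage' option requires R2014b or later:

frame = print('-RGBImage', '-r300');         % render the current figure at 300 dpi
smoothed = imresize(frame, 0.25, 'bicubic'); % downsample to average out the jaggies
figure, imshow(smoothed)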