I am working on drawing bounding boxes on images from my own training dataset using the TensorFlow Object Detection API, and I'm now stuck at the part of extracting the bounded image. I just want the image of the part inside the bounding box.
The input image to be predicted.
The predicted image with the bounding box outlines.
Please help me with this query. The resultant image should be like this.
Detection Function in TensorFlow
# Detection Function
detections = detect_fn(input_tensor)
bscores = detections['detection_scores'][0].numpy()
bclasses = detections['detection_classes'][0].numpy().astype(np.int32)
bboxes = detections['detection_boxes'][0].numpy()
# pass the decoded image tensor along so the crop inside ExtractBBoxes can use it
det_boxes, class_labels = ExtractBBoxes(bboxes, bclasses, bscores, im_width, im_height, image, image_name=image_file)
Method to extract and crop bounding box
def ExtractBBoxes(bboxes, bclasses, bscores, im_width, im_height, image, image_name):
    bbox = []
    class_labels = []
    for idx in range(len(bboxes)):
        if bscores[idx] >= Threshold:
            # Region of Interest: boxes are normalized [ymin, xmin, ymax, xmax]
            y_min = int(bboxes[idx][0] * im_height)
            x_min = int(bboxes[idx][1] * im_width)
            y_max = int(bboxes[idx][2] * im_height)
            x_max = int(bboxes[idx][3] * im_width)
            class_label = category_index[int(bclasses[idx])]['name']
            class_labels.append(class_label)
            bbox.append([x_min, y_min, x_max, y_max, class_label, float(bscores[idx])])
            # Crop Image: crop_to_bounding_box takes the top-left offset and the size
            cropped_image = tf.image.crop_to_bounding_box(image, y_min, x_min, y_max - y_min, x_max - x_min)
            # encode_jpeg expects a uint8 tensor, not int32
            output_image = tf.image.encode_jpeg(tf.cast(cropped_image, tf.uint8)) # For Jpeg
            score = bscores[idx] * 100
            # Create a constant as filename (your_filename is a placeholder)
            file_name = tf.constant(your_filename)
            tf.io.write_file(file_name, output_image)
    return bbox, class_labels
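If you already have the image as a NumPy array, the same crop can be done with plain array slicing instead of TensorFlow ops. A minimal sketch, assuming image is an HxWx3 uint8 array and x_min/y_min/x_max/y_max are the pixel coordinates computed above (crop_box and out_path are hypothetical names):
import numpy as np
from PIL import Image

def crop_box(image, x_min, y_min, x_max, y_max, out_path):
    # NumPy images are indexed [row, column], i.e. [y, x]
    crop = image[y_min:y_max, x_min:x_max]
    # Save the cropped region as its own file
    Image.fromarray(crop).save(out_path)
    return crop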
I am detecting circles in an image. I get back the circle radii and the X,Y coordinates of the centers. I know how to crop one circle, no problem, with the formula:
X-radius, Y-radius, width=2*r, height=2*r, using imcrop.
My problem is when more than one circle is returned. The circle radii are returned in an array, radiiarray, and the circle centers in centarray.
When I disp(centarray), it looks like this:
146.4930 144.4943
610.0317 142.1734
When I check size(centarray) and disp it, I get:
2 2
So I understand the first column holds X values and the second holds Y values, meaning the first circle's center is at (146, 144).
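MATLAB stores matrices column-major, so linear indexing into the 2x2 centarray walks down the first column before the second: centarray(1) and centarray(2) are the two X values, and centarray(3) and centarray(4) are the two Y values. A small sketch reproducing that layout in Python/NumPy, for illustration only ('F' is Fortran, i.e. column-major, order):
import numpy as np

# Same 2x2 matrix as centarray: one row per circle, columns are X and Y
centers = np.array([[146.4930, 144.4943],
                    [610.0317, 142.1734]])
# MATLAB-style linear indexing flattens column-major ('F' order)
print(centers.ravel(order='F'))  # [146.493  610.0317 144.4943 142.1734] -> X1, X2, Y1, Y2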
I made a loop that works for only one circle. The "------" parts below are where I'm unsure what to use to get:
note: radius = r
1st circle)
X = centarray(1)-r;
Y = centarray(3)-r;
Width = 2*r;
Height = 2*r;
2nd circle)
X = centarray(2)-r;
Y = centarray(4)-r;
Width = 2*r;
Height = 2*r;
How would I modify the "------" parts of my code? I would also like the loop to work for 3+ circles, as I sometimes get up to 9 circles from an image.
B = imread('p5.tif');
centarray = [];
centarray = [centarray,centers];
radiiarray = [];
radiiarray = [radiiarray,radii];
for j=1:length(radiiarray)
    x = centarray((------))-radiiarray(j); %X value to crop
    y = centarray((------))-radiiarray(j); %Y value to crop
    width = 2*radiiarray(j); %WIDTH
    height = 2*radiiarray(j); %HEIGHT
    K = imcrop(B, [x y width height]);
end
My full code, which doesn't work; I realized why when I saw the way the values are stored:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DETECT + GET X Y WIDTH HEIGHT OF CIRCLES
I = imread('p5.tif');
subplot(2,2,1);imshow(I);title('Original Image');
%sharpen edges
B = imsharpen(I);
subplot(2,2,2);imshow(B);title('sharpened edges');
%find circles
Img = im2bw(B(:,:,3));
minRad = 20;
maxRad = 90;
[centers, radii] = imfindcircles(Img, [minRad maxRad], ...
'ObjectPolarity','bright','sensitivity',0.84);
imagesc(Img);
viscircles(centers, radii,'Color','green');
%number of circles found
%arrays to store values for radii and centers
centarray = [];
centarray = [centarray,centers];
radiiarray = [];
radiiarray = [radiiarray,radii];
sc = size(centarray);
disp(sc)
disp(centarray)
disp(radiiarray)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%CROP USING VALUE FROM ARRAYS NUMBER OF TIMES THERE ARE CENTERS(number of
%circles)
for j=1:length(radiiarray)
    x = centarray((2*j)-1)-radiiarray(j); %X value to crop
    y = centarray((2*j))-radiiarray(j); %Y value to crop
    width = 2*radiiarray(j); %WIDTH
    height = 2*radiiarray(j); %HEIGHT
    disp(x)
    disp(y)
    disp(centarray)
    %crop using values
    K = imcrop(B, [x y width height]);
    %togray
    gray = rgb2gray(K);
    subplot(2,2,3);imshow(K);title('cropped before bw');
    Icorrected = imtophat(gray, strel('disk', 15));
    %to black and white
    black = im2bw(Icorrected);
    subplot(2,2,4);imshow(black);title('sharpened edges');
    %read
    results = ocr(black);
    number = results.Text;
    %display value
    disp(number)
end
Any help on how to create this kind of loop is appreciated, as I have no more ideas and can't find an answer to this.
EDIT
SOLUTION
The answer is to treat the matrix as two-dimensional.
for j=1:length(radiiarray)
    x = centarray(j,1) - radiiarray(j)  % row j of the 2-column matrix
    y = centarray(j,2) - radiiarray(j)
    width = 2*radiiarray(j)
    height = 2*radiiarray(j)
end
As j increases, the values now update correctly.
Thanks to @beaker for his comment! That's how I figured it out.
I have the following image and I would like to segment the rectangular object in the middle. I implemented the following code, but I cannot isolate the object. What functions or approaches can I take to isolate the rectangular object in the image?
im = imread('image.jpg');
% convert image to grayscale and split the HSV channels
imHSV = rgb2hsv(im);
imGray = rgb2gray(im);
imSat = imHSV(:,:,2);
imHue = imHSV(:,:,1);
imVal = imHSV(:,:,3);
background = imopen(im,strel('disk',15));
I2 = im - background;
% detect edge using sobel algorithm
[~, threshold] = edge(imGray, 'sobel');
fudgeFactor = .5;
imEdge = edge(imGray,'sobel', threshold * fudgeFactor);
%figure, imshow(imEdge);
% split image into colour channels
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
% convert image to binary image (using thresholding)
imBlobs = and((imSat < 0.6),(imHue < 0.6));
imBlobs = and(imBlobs, ((redIM + greenIM + blueIM) > 150));
imBlobs = imfill(~imBlobs,4);
imBlobs = bwareaopen(imBlobs,50);
figure,imshow(imBlobs);
In this example, you can leverage the fact that the rectangle contains blue in all of its corners in order to build a good initial mask:
1. Use thresholding to locate the blue regions in the image and create an initial mask.
2. Given this initial mask, find its corners using min and max operations.
3. Connect the corners with lines in order to obtain a rectangle.
4. Fill the rectangle using imfill.
Code example:
% convert image to binary image (using thresholding)
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
mask = blueIM > redIM*2 & blueIM > greenIM*2;
%noise cleaning
mask = imopen(mask,strel('disk',3));
%find the corners of the rectangle
[Y, X] = ind2sub(size(mask),find(mask));
minYCoords = find(Y==min(Y));
maxYCoords = find(Y==max(Y));
minXCoords = find(X==min(X));
maxXCoords = find(X==max(X));
%top corners
topRightInd = find(X(minYCoords)==max(X(minYCoords)),1,'last');
topLeftInd = find(Y(minXCoords)==min(Y(minXCoords)),1,'last');
p1 = [Y(minYCoords(topRightInd)) X((minYCoords(topRightInd)))];
p2 = [Y(minXCoords(topLeftInd)) X((minXCoords(topLeftInd)))];
%bottom corners
bottomRightInd = find(Y(maxXCoords)==max(Y(maxXCoords)),1,'last');
bottomLeftInd = find(X(minYCoords)==min(X(minYCoords)),1,'last');
p3 = [Y(maxXCoords(bottomRightInd)) X((maxXCoords(bottomRightInd)))];
p4 = [Y(maxYCoords(bottomLeftInd)) X((maxYCoords(bottomLeftInd)))];
%connect between the corners with lines
l1Inds = drawline(p1,p2,size(mask));
l2Inds = drawline(p3,p4,size(mask));
maskOut = mask;
maskOut([l1Inds,l2Inds]) = 1;
%fill the rectangle which was created
midP = ceil((p1+p2+p3+p4)./4);
maskOut = imfill(maskOut,midP);
%present the final result
figure,imshow(maskOut);
Final Result:
Intermediate results (1: after thresholding, 2: after adding the lines):
* The drawline function is taken from the drawline webpage.
GOAL:
In this project, we calculate the gravitational forces exerted by each body in a solar system on the other bodies in the system. Based upon those forces and initial position/velocities of the bodies, one can predict their motion using Newton’s second law. We assume that the following information is given:
The masses of all bodies involved in a solar system.
The positions and velocities of all planets at a given time.
Then we can calculate the gravitational forces using Newton’s law of universal gravitation. Each body is acted upon by gravitational forces from all the other bodies, and therefore the total force on the body is the sum of the forces induced by all the other bodies. Once the force on a body is known, the acceleration of each body can be determined using Newton’s second law of motion. Based upon this acceleration at any time t, we can calculate the new velocity and position of the bodies at time t + ∆t. We start with known positions and velocities at time t = 0, then calculate those at ∆t, then at 2∆t and so on.
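The update described above is a forward Euler step. A minimal sketch of one such step in Python (illustrative only; euler_step and the array shapes are assumptions, not part of the assignment):
import numpy as np

G = 6.67384e-11  # gravitational constant

def euler_step(pos, vel, mass, dt):
    """One Euler step for n bodies. pos, vel: (n, 3) arrays; mass: (n,)."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]          # vector from body i to body j
                r = np.linalg.norm(d)
                # acceleration on i from Newton's law of universal gravitation
                acc[i] += G * mass[j] * d / r**3
    vel = vel + dt * acc   # v(t + dt) = v(t) + a(t) * dt
    pos = pos + dt * vel   # x(t + dt) = x(t) + v(t + dt) * dt
    return pos, vel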
ASSIGNMENT:
Write a MATLAB function that takes as input a struct array (where each element represents one body in your solar system), a time step ∆t and a final time T. Each struct in the struct array should have the following fields:
name: A string that holds the name of the planet.
x: The x-coordinate of the initial position of the planet.
y: The y-coordinate of the initial position of the planet.
z: The z-coordinate of the initial position of the planet.
vx: The x-component of the initial velocity of the planet.
vy: The y-component of the initial velocity of the planet.
vz: The z-component of the initial velocity of the planet.
Then your function should use the model of a gravitational system described above to compute the positions and velocities of all planets at all times. Your function should return these values in an appropriate way and also produce a plot of the positions of all planets in a 3D plot. The plot should at least contain a title and a legend. The legend should use the field name in order to name each planet. Further, each planet should be shown in a different color where the color is determined randomly. A sample plot is given in Figure 1.
MY ERROR
There is an error in my last loop over time (the one with n=1:T/t) regarding matrix dimensions. I'm trying to calculate new forces using the updated positions from each iteration, however I am getting an error, and my plot shows a bunch of straight lines, not a solar system orbit.
function [x, y, z, vx, vy, vz] = solarsystemsim(F,t,T)
m = size(F,2);
G = 6.67384e-11;
j=1;
x = 1:10;
% r = zeros(m-1,1);
% gF = zeros(m-1,1);
for(i=1:m)
    for(j=1:m)
        r(i,j) = distform2(F(i).x,F(j).x,F(i).y,F(j).y,F(i).z,F(j).z);
        gF(i,j) = (G*(F(i).mass)*(F(j).mass))/((r(i,j))^2);
        gFx(i,j) = -((gF(i,j))*(F(i).x-F(j).x))/(r(i,j));
        gFy(i,j) = -((gF(i,j))*(F(i).y-F(j).y))/(r(i,j));
        gFz(i,j) = -((gF(i,j))*(F(i).z-F(j).z))/(r(i,j));
        if(i==j)
            gF(i,j) = 0;
            gFx(i,j) = 0;
            gFy(i,j) = 0;
            gFz(i,j) = 0;
        end
    end
end
for(i=1:m)
    gFxT(i) = sum(gFx(i,:));
    gFyT(i) = sum(gFy(i,:));
    gFzT(i) = sum(gFz(i,:));
end
tn = 0;
n = 1;
x = zeros(T/t,m);
y = zeros(T/t,m);
z = zeros(T/t,m);
vx = zeros(T/t,m);
vy = zeros(T/t,m);
vz = zeros(T/t,m);
for(i=1:m)
    vx(1,i) = F(i).vx;
    vy(1,i) = F(i).vy;
    vz(1,i) = F(i).vz;
    x(1,i) = F(i).x;
    y(1,i) = F(i).y;
    z(1,i) = F(i).z;
end
for(n=1:T/t)
    for(i=1:m)
        for(j=1:m)
            r(i+1,j+1) = distform2(x(i,j),x(i,j),y(i,j),y(i,j),z(i,j),z(i,j));
            gF(i+1,j+1) = (G*(F(i).mass)*(F(j).mass))/((r(i,j))^2);
            gFx(i+1,j+1) = -((gF(i,j))*(x(i,j)-x(i,j+1)))/(r(i,j));
            gFy(i+1,j+1) = -((gF(i,j))*(y(i,j)-y(i,j+1)))/(r(i,j));
            gFz(i+1,j+1) = -((gF(i,j))*(z(i,j)-z(i,j+1)))/(r(i,j));
            if(i==j)
                gF(i,j) = 0;
                gFx(i,j) = 0;
                gFy(i,j) = 0;
                gFz(i,j) = 0;
            end
        end
    end
    gFxT(i) = sum(gFx(i,:));
    gFyT(i) = sum(gFy(i,:));
    gFzT(i) = sum(gFz(i,:));
    for(i=1:T/t)
        for(j=1:m)
            vx(i,j) = vx(i,j) + t*(gFxT(j))/(F(j).mass);
            vy(i,j) = vy(i,j) + t*(gFyT(j))/(F(j).mass);
            vz(i,j) = vz(i,j) + t*(gFzT(j))/(F(j).mass);
            x(i,j) = x(i,j) + t*(vx(j));
            y(i,j) = y(i,j) + t*(vy(j));
            z(i,j) = z(i,j) + t*(vz(j));
        end
    end
    plot3(x,y,z);
end
F is not a matrix, so you can't index it using parentheses. It's a cell array and must be indexed using braces {}.
EDITED to correct struct to cell as per lmillefiori's comment
BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE: original photo link
Input: the original image with the fisheye distortion to fix.
(2) DESTINATION: original photo link
Output: the corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p,const Size& img) {
// to polar
const Point centre = {img.width/2,img.height/2};
const Point rel = {p.x-centre.x,p.y-centre.y};
const double theta = atan2(rel.y,rel.x);
double R = sqrt((rel.x*rel.x)+(rel.y*rel.y));
// fisheye undistortion in here please
//... change R ...
// back to rectangular
const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta));
fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y);
return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
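For illustration, that correction can be applied per radius before converting back to x,y coordinates. A minimal sketch in Python (undistort_radius is a hypothetical helper; f is the focal length in pixels and must satisfy R_d < 2*f):
from math import tan, asin

def undistort_radius(r_d, f):
    """Map a fisheye radius R_d to the rectilinear radius R_u,
    using R_d = 2*f*sin(theta/2) and R_u = f*tan(theta)."""
    theta = 2.0 * asin(r_d / (2.0 * f))  # recover the incidence angle
    return f * tan(theta)                # reproject with the pinhole model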
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the errata). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python
# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056
import sys
import cv

def main(argv):
    if len(argv) < 10:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)
    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]
    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)
    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)
    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial and error: I don't fundamentally grasp why this code works, so explanations and improved accuracy are appreciated!
from math import sqrt, atan

def dist(x,y):
    return sqrt(x*x+y*y)

def correct_fisheye(src_size,dest_size,dx,dy,factor):
    """ returns a tuple of source coordinates (sx,sy)
        (note: values can be out of range)"""
    # convert dx,dy to relative coordinates
    rx, ry = dx-(dest_size[0]/2), dy-(dest_size[1]/2)
    # calc theta
    r = dist(rx,ry)/(dist(src_size[0],src_size[1])/factor)
    if 0==r:
        theta = 1.0
    else:
        theta = atan(r)/r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2)+theta*rx, (src_size[1]/2)+theta*ry
    # done
    return (int(round(sx)),int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
Dead link
(And this is from the blog post, for comparison:)
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
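As a sketch of that idea, the first coefficients can be least-squares fitted against the exact formula above (NumPy for illustration; fit_distortion_poly, the two-term truncation, and the fit range are assumptions):
import numpy as np

def fit_distortion_poly(f, r_max, n_samples=200):
    """Fit Rout = Rin*(1 + A*Rin^2 + B*Rin^4) to the exact mapping
    Rout = f*tan(2*asin(Rin/(2*f))). Keep r_max < f*sqrt(2) so tan stays finite."""
    r_in = np.linspace(1e-6, r_max, n_samples)
    r_out = f * np.tan(2.0 * np.arcsin(r_in / (2.0 * f)))
    # Rout/Rin - 1 = A*Rin^2 + B*Rin^4 is linear in A and B
    lhs = np.column_stack([r_in**2, r_in**4])
    rhs = r_out / r_in - 1.0
    (A, B), *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return A, B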
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2+1) * (sqrt([Rout/f]^2+1) + 1) / 2 )
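A quick numeric round-trip check of that inverse (a sketch; the value of f is arbitrary since only the ratios matter):
from math import sqrt, tan, asin

def forward(r_in, f):   # fisheye radius -> rectilinear radius
    return f * tan(2.0 * asin(r_in / (2.0 * f)))

def inverse(r_out, f):  # rectilinear radius -> fisheye radius
    t = r_out / f
    s = sqrt(t * t + 1.0)
    return f * t / sqrt(s * (s + 1.0) / 2.0)

f = 100.0
for r in (10.0, 50.0, 120.0):
    assert abs(inverse(forward(r, f), f) - r) < 1e-6  # round-trips to r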
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import sqrt, tan, asin

def dist(x,y):
    return sqrt(x*x+y*y)

def fisheye_to_rectilinear(src_size,dest_size,sx,sy,crop_factor,zoom):
    """ returns a tuple of dest coordinates (dx,dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image"""
    # convert sx,sy to relative coordinates
    rx, ry = sx-(src_size[0]/2), sy-(src_size[1]/2)
    r = dist(rx,ry)
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0],src_size[1])*crop_factor/pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin ( r / ( 2 * f ) ) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2)+rx/r*nr, (dest_size[1]/2)+ry/r*nr
    # done
    return (int(round(dx)),int(round(dy)))
def fisheye_auto_zoom(src_size,dest_size,crop_factor):
    """ calculate zoom such that left edge of source image matches left edge of dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
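For reference, that formula follows from the two projection models in a few lines (w is the incidence angle and r = Ru/f, as above):
Ru = f*tan(w)        -> cos(w) = 1/sqrt(r^2 + 1)
Rd = 2*f*sin(w/2)    and [sin(w/2)]^2 = (1 - cos(w))/2
=> Rd = 2*f*sqrt((1 - 1/sqrt(r^2 + 1))/2) = f*sqrt(2)*sqrt(1 - 1/sqrt(r^2 + 1))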
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
private ArrayList mFisheyeCorrect;
private int mFELimit = 1500;
private double mScaleFESize = 0.9;
public Analyzer()
{
//A lookup table so we don't have to calculate Rdistorted over and over
//The values will be multiplied by focal length in pixels to
//get the Rdistorted
mFisheyeCorrect = new ArrayList(mFELimit);
//i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
for (int i = 0; i < mFELimit; i++)
{
double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
mFisheyeCorrect.Add(result);
}
}
public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
{
Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
//The center points of the image
double xc = aImage.Width / 2.0;
double yc = aImage.Height / 2.0;
Boolean xpos, ypos;
//Move through the pixels in the corrected image;
//set to corresponding pixels in distorted image
for (int i = 0; i < correctedImage.Width; i++)
{
for (int j = 0; j < correctedImage.Height; j++)
{
//which quadrant are we in?
xpos = i > xc;
ypos = j > yc;
//Find the distance from the center
double xdif = i-xc;
double ydif = j-yc;
//The distance squared
double Rusquare = xdif * xdif + ydif * ydif;
//the angle from the center
double theta = Math.Atan2(ydif, xdif);
//find index for lookup table
int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
if (index >= mFELimit) index = mFELimit - 1;
//calculated Rdistorted
double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index]
/mScaleFESize;
//calculate x and y distances
double xdelta = Math.Abs(Rd*Math.Cos(theta));
double ydelta = Math.Abs(Rd * Math.Sin(theta));
//convert to pixel coordinates
int xd = (int)(xc + (xpos ? xdelta : -xdelta));
int yd = (int)(yc + (ypos ? ydelta : -ydelta));
xd = Math.Max(0, Math.Min(xd, aImage.Width-1));
yd = Math.Max(0, Math.Min(yd, aImage.Height-1));
//set the corrected pixel value from the distorted image
correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
}
}
return correctedImage;
}
}
I found this PDF file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Draw curves on the images along features that are supposed to be straight lines, and store the contour points for each of those curves. Then solve for the fisheye matrix as a minimization problem: minimize the deviation of those contour points from straight lines, and the result is the fisheye matrix. It works.
It can also be done manually by adjusting the fisheye matrix using trackbars! Here is a fisheye GUI code using OpenCV for manual calibration.
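A rough sketch of that optimization in Python, assuming a one-coefficient radial model and scipy; the model, the straightness cost, and the function names are assumptions for illustration, not the exact method described above:
import numpy as np
from scipy.optimize import minimize_scalar

def undistort_points(pts, k, center):
    """One-parameter radial model: p_u = c + (p_d - c) * (1 + k*r^2)."""
    d = pts - center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    return center + d * (1.0 + k * r2)

def straightness_cost(pts):
    """Sum of squared distances of the points to their best-fit line."""
    c = pts.mean(axis=0)
    # the smallest singular value measures the out-of-line spread
    return np.linalg.svd(pts - c, compute_uv=False)[-1] ** 2

def calibrate(curves, center):
    """curves: list of (n, 2) point arrays traced along lines that should be straight."""
    cost = lambda k: sum(straightness_cost(undistort_points(c, k, center))
                         for c in curves)
    return minimize_scalar(cost, bounds=(-1e-5, 1e-5), method='bounded').x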