BOUNTY STATUS UPDATE:
I have worked out how to map a linear lens from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE (original photo link). Input: the original image with fish-eye distortion to fix.
(2) DESTINATION (original photo link). Output: the corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p,const Size& img) {
    // to polar
    const Point centre = {img.width/2,img.height/2};
    const Point rel = {p.x-centre.x,p.y-centre.y};
    const double theta = atan2(rel.y,rel.x);
    double R = sqrt((rel.x*rel.x)+(rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta));
    fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. The focal length f can be estimated by calibrating the camera, or by other means such as letting the user provide feedback on how well the image is corrected, or by using knowledge from the original scene.
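In code, a minimal Python sketch of just that radial step (assuming f is already known in pixels and R_d stays within the lens's valid range):

from math import tan, asin

def undistort_radius(r_d, f):
    """R_u = f * tan(2 * asin(R_d / (2*f))).
       f is the focal length in pixels (same units as R_d); R_d must be small
       enough that 2*asin(R_d/(2*f)) < pi/2, otherwise tan() blows up."""
    return f * tan(2.0 * asin(r_d / (2.0 * f)))

# e.g. in the function stub above, "change R" would become: R = undistort_radius(R, f)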
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (and don't forget to check the errata). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python

# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    # expects input-file, 8 numeric parameters, and output-file
    if len(argv) < 11:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)

    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
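For anyone on the newer cv2 bindings, roughly the same correction can be written like this (an untested sketch; the parameter order follows the usage string above, and cv2 also accepts a fifth coefficient k3):

import numpy as np
import cv2

def undistort(path_in, fx, fy, cx, cy, k1, k2, p1, p2, path_out):
    img = cv2.imread(path_in)
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    dist = np.array([k1, k2, p1, p2])   # append k3 here if you have it
    cv2.imwrite(path_out, cv2.undistort(img, K, dist))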
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial and error: I don't fundamentally grasp why this code works, so explanations and improved accuracy are appreciated!
from math import sqrt, atan

def dist(x,y):
    return sqrt(x*x+y*y)

def correct_fisheye(src_size,dest_size,dx,dy,factor):
    """ returns a tuple of source coordinates (sx,sy)
        (note: values can be out of range)"""
    # convert dx,dy to relative coordinates
    rx, ry = dx-(dest_size[0]/2), dy-(dest_size[1]/2)
    # calc theta
    r = dist(rx,ry)/(dist(src_size[0],src_size[1])/factor)
    if 0==r:
        theta = 1.0
    else:
        theta = atan(r)/r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2)+theta*rx, (src_size[1]/2)+theta*ry
    # done
    return (int(round(sx)),int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
Dead link
(And this is from the blog post, for comparison:)
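(For what it's worth, this particular mapping can be inverted in closed form, because the forward map only scales the radius by atan(r)/r. A minimal, untested sketch going the other way, source to destination, under the same factor and centre conventions as above:)

from math import sqrt, tan

def dist(x, y):
    return sqrt(x*x + y*y)

def uncorrect_fisheye(src_size, dest_size, sx, sy, factor):
    """ inverse of correct_fisheye above: maps source (fisheye) coordinates to
        destination coordinates; the source radius must stay below
        (pi/2) * diagonal/factor so that tan() stays finite """
    rx, ry = sx - (src_size[0]/2), sy - (src_size[1]/2)
    k = dist(src_size[0], src_size[1]) / factor
    q = dist(rx, ry) / k
    scale = 1.0 if q == 0 else tan(q) / q   # tan undoes the atan in the forward map
    dx, dy = (dest_size[0]/2) + scale*rx, (dest_size[1]/2) + scale*ry
    return (int(round(dx)), int(round(dy)))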
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
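A minimal sketch of that expansion (the coefficients are placeholders you would tune by hand or fit to calibration data):

def polynomial_radius(r_in, coeffs):
    """Rout = Rin * (1 + A*Rin^2 + B*Rin^4 + ...); coeffs = [A, B, ...]."""
    r2 = r_in * r_in
    scale, power = 1.0, 1.0
    for c in coeffs:
        power *= r2
        scale += c * power
    return r_in * scale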
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2+1) * (sqrt([Rout/f]^2+1) + 1) / 2 )
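As a quick sanity check that this inverse matches the forward formula above, a throwaway numerical sketch:

from math import sqrt

def forward(x):            # x = Rin/f  ->  Rout/f
    return sqrt(1.0 / (1.0 - x*x/2.0)**2 - 1.0)

def inverse(y):            # y = Rout/f ->  Rin/f
    s = sqrt(y*y + 1.0)
    return y / sqrt(s * (s + 1.0) / 2.0)

print(inverse(forward(0.3)))   # prints ~0.3, so the two formulas agree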
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import sqrt, tan, asin

def dist(x,y):
    return sqrt(x*x+y*y)

def fisheye_to_rectilinear(src_size,dest_size,sx,sy,crop_factor,zoom):
    """ returns a tuple of dest coordinates (dx,dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image"""
    # convert sx,sy to relative coordinates
    rx, ry = sx-(src_size[0]/2), sy-(src_size[1]/2)
    r = dist(rx,ry)
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0],src_size[1])*crop_factor/pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin ( r / ( 2 * f ) ) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates (guard against r == 0 at the exact centre)
    if r == 0:
        dx, dy = dest_size[0]/2, dest_size[1]/2
    else:
        dx, dy = (dest_size[0]/2)+rx/r*nr, (dest_size[1]/2)+ry/r*nr
    # done
    return (int(round(dx)),int(round(dy)))
def fisheye_auto_zoom(src_size,dest_size,crop_factor):
    """ calculate zoom such that left edge of source image matches left edge of dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
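Before the full C# version below, here is the same per-pixel reverse mapping as a compact, untested Python sketch (assuming f is already known in pixels):

from math import sqrt, atan2, cos, sin

def undistorted_to_distorted(xu, yu, xc, yc, f):
    """ maps a pixel (xu, yu) of the undistorted image to the pixel (xd, yd)
        to sample in the distorted image, using Rd = f*sqrt(2)*sqrt(1 - 1/sqrt(r^2+1)) """
    ru = sqrt((xu - xc)**2 + (yu - yc)**2)
    r = ru / f
    rd = f * sqrt(2.0) * sqrt(1.0 - 1.0 / sqrt(r*r + 1.0))
    theta = atan2(yu - yc, xu - xc)        # atan2 handles the quadrant for us
    return xc + rd*cos(theta), yc + rd*sin(theta)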
Here is the C# code I used
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index]
                            / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}
I found this pdf file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure that it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
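As noted, the snippet above reads k3 but never uses it. For comparison, here is the same point-distortion step as a small Python sketch with the k3*r^6 radial term that OpenCV's model also includes (everything else follows the code above):

def distort_point(u, v, fu, fv, u0, v0, k1, k2, p1, p2, k3=0.0):
    """ maps an undistorted pixel (u, v) to its distorted position (ud, vd) """
    x = (u - u0) / fu
    y = (v - v0) / fv
    r2 = x*x + y*y
    radial = 1.0 + k1*r2 + k2*r2*r2 + k3*r2*r2*r2
    xd = x*radial + 2.0*p1*x*y + p2*(r2 + 2.0*x*x)
    yd = y*radial + p1*(r2 + 2.0*y*y) + 2.0*p2*x*y
    return xd*fu + u0, yd*fv + v0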
This can be solved as an optimization problem. Draw curves over image features that are supposed to be straight lines, and store the contour points for each of those curves. The fisheye (distortion) matrix can then be solved as a minimization problem: minimize the curvature of those point sets, and that gives you the fisheye matrix. It works.
It can also be done manually, by adjusting the fisheye matrix with trackbars; there is a fisheye GUI written with OpenCV for that kind of manual calibration.
Related
I am interested in building a hexagonal torus using a mesh of points.
I think I can start with a 2-D polygon and then iterate 360 times (1 deg resolution) to build a complete solid.
Is this the best way to do this? What I'm really after is building wing profiles with variable cross-section geometry over their span.
Doing it your way, you can use polyhedron(). Add an appropriate number of points per profile, in a defined order, to a vector "points"; define the faces by the indices of those points in a second vector "faces"; and pass both vectors as parameters to polyhedron() (see the documentation). You can control the quality of the surface via the number of points per profile and the distance between the profiles (sections of the torus).
Here an example code:
// parameter:
r1 = 20; // radius of torus
r2 = 4; // radius of polygon/ thickness of torus
s = 360; // sections per 360 deg
p = 6; // points on polygon
a = 30; // angle of the first point on Polygon
// points on cross-section
// angle = 360*i/p + startangle, x = r2*cos(angle), y = 0, z = r2*sin(angle)
function cs_point(i) = [r1 + r2*cos(360*i/p + a), 0, r2*sin(360*i/p + a)];
// for an index i into the points vector, returns the section number and the number of the point within that section
function point_index(i) = [floor(i/p), i - p*floor(i/p)];
// returns the point's x-, y-, z-coordinates by rotating the corresponding cross-section point around the z-axis
function iterate_cs(i) = [cs[point_index(i)[1]][0]*cos(360*floor(i/p)/s), cs[point_index(i)[1]][0]*sin(360*floor(i/p)/s), cs[point_index(i)[1]][2]];
// for every point find neighbour points to build faces, ( + p: point on the next cross-section), points ordered clockwise
// to connect point on last section to corresponding points on first section
function item_add1(i) = i >= (s - 1)*p ? -(s)*p : 0;
// to connect last point on section to first points on the same and the next section
function item_add2(i) = i - p*floor(i/p) >= p-1 ? -p : 0;
// build faces
function find_neighbours1(i) = [i, i + 1 + item_add2(i), i + 1 + item_add2(i) + p + item_add1(i)];
function find_neighbours2(i) = [i, i + 1 + item_add2(i) + p + item_add1(i), i + p + item_add1(i)];
cs = [for (i = [0:p-1]) cs_point(i)];
points = [for (i = [0:s*p - 1]) iterate_cs(i)];
faces1 = [for (i = [0:s*p - 1]) find_neighbours1(i)];
faces2 = [for (i = [0:s*p - 1]) find_neighbours2(i)];
faces = concat(faces1, faces2);
polyhedron(points = points, faces = faces);
Here is the result:
Since OpenSCAD 2015.03, faces can have more than 3 points, provided all points of a face lie on the same plane, so in this case the faces could also be built in one step.
Are you building something like NACA airfoils? https://en.wikipedia.org/wiki/NACA_airfoil
There are a few OpenSCAD designs for those floating around, see e.g. https://www.thingiverse.com/thing:898554
I'm trying to calculate the principal axes via principal component analysis. I have a point cloud and use the Point Cloud Library (PCL) for this. Furthermore, I try to visualize the principal axes I calculated in rviz with markers. Here is the code snippet I use:
void computePrincipalAxis(const PointCloud& cloud, Eigen::Vector4f& centroid, Eigen::Matrix3f& evecs, Eigen::Vector3f& evals) {
    Eigen::Matrix3f covariance_matrix;
    pcl::computeCovarianceMatrix(cloud, centroid, covariance_matrix);
    pcl::eigen33(covariance_matrix, evecs, evals);
}

void createArrowMarker(Eigen::Vector3f& vec, int id, double length) {
    visualization_msgs::Marker marker;
    marker.header.frame_id = frameId;
    marker.header.stamp = ros::Time();
    marker.id = id;
    marker.type = visualization_msgs::Marker::ARROW;
    marker.action = visualization_msgs::Marker::ADD;
    marker.pose.position.x = centroid[0];
    marker.pose.position.y = centroid[1];
    marker.pose.position.z = centroid[2];
    marker.pose.orientation.x = vec[0];
    marker.pose.orientation.y = vec[1];
    marker.pose.orientation.z = vec[2];
    marker.pose.orientation.w = 1.0;
    marker.scale.x = length;
    marker.scale.y = 0.02;
    marker.scale.z = 0.02;
    marker.color.a = 1.0;
    marker.color.r = 1.0;
    marker.color.g = 1.0;
    marker.color.b = 0.0;
    featureVis.markers.push_back(marker);
}
Eigen::Vector4f centroid;
Eigen::Matrix3f evecs;
Eigen::Vector3f evals;
// Table is the pointcloud of the table only.
pcl::compute3DCentroid(*table, centroid);
computePrincipalAxis(*table, centroid, evecs, evals);
Eigen::Vector3f vec;
vec << evecs.col(0);
createArrowMarker(vec, 1, evals[0]);
vec << evecs.col(1);
createArrowMarker(vec, 2, evals[1]);
vec << evecs.col(2);
createArrowMarker(vec, 3, evals[2]);
publish();
This results in the following visualization:
I'm aware that the scaling is not perfect; the two longer arrows are much too long. But I'm confused about a few things:
I think the small arrow should point either upwards or downwards.
What does the value orientation.w of the arrow's orientation mean?
Do you have some hints what I did wrong?
Orientations are represented by Quaternions in ROS, not by directional vectors. Quaternions can be a bit unintuitive, but fortunately there are some helper functions in the tf package, to generate quaternions, for example, from roll/pitch/yaw-angles.
One way to fix the marker would therefore be, to convert the direction vector into a quaternion.
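For example, a minimal sketch of that conversion (assuming a ROS 1 Python node with numpy and tf available; the ARROW marker's default direction is along +X):

import numpy as np
from tf.transformations import quaternion_about_axis

def direction_to_quaternion(v):
    """ quaternion [x, y, z, w] rotating the marker's default +X axis onto direction v """
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    x_axis = np.array([1.0, 0.0, 0.0])
    axis = np.cross(x_axis, v)
    if np.linalg.norm(axis) < 1e-9:        # v is (anti)parallel to the x-axis
        axis = np.array([0.0, 0.0, 1.0])
    angle = np.arccos(np.clip(np.dot(x_axis, v), -1.0, 1.0))
    return quaternion_about_axis(angle, axis)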
In your special case, there is a much simpler solution, though: Instead of setting origin and orientation of the arrow, it is also possible to define start and end point (see ROS wiki about marker types). So instead of setting the pose attribute, just add start and end point to the points attribute:
float k = 1.0; // optional to scale the length of the arrows
geometry_msgs::Point p;
p.x = centroid[0];
p.y = centroid[1];
p.z = centroid[2];
marker.points.push_back(p);
p.x += k * vec[0];
p.y += k * vec[1];
p.z += k * vec[2];
marker.points.push_back(p);
You can set k to some value < 1 to reduce the length of the arrows.
Currently, I'm working on offline signature verification. What I want to do now is find the rotation angle for a particular signature image, so that I can normalize the baseline of the signature to the horizontal axis.
I've tried some code:
v1(1) = column2 - column1;
v1(2) = row2 - row1;
v2(1) = column2 - column1;
v2(2) = row1 - row1;
x1 = v1(1);
y1 = v1(2);
x2 = v2(1);
y2 = v2(2);
dotproduct = (x1*x2 + y1*y2);
v1mag = sqrt(x1*x1 + y1*y1);
v2mag = sqrt(x2*x2 + y2*y2);
costheta = dotproduct/(v1mag*v2mag);
angle = acos(costheta);
angleDeg = rad2deg(angle);
angleDeg = uint8(angleDeg);
angleDeg
%B = imrotate(invImg,-(angleDeg),'bilinear');
As you can see from the code, the variable angleDeg holds the rotation angle. Previously I used MATLAB's imrotate(), but the problem is that I had to enter the angle value manually instead of passing the variable angleDeg. Is there any other method/algorithm to rotate an image by passing the variable that holds the angle value, besides imrotate()?
Why do you need to input the angle manually? Why do you cast angleDeg to uint8?
By the way, the function regionprops can give you the 'Orientation' of the connected component in a binary image, i. e. the angle between the major axis of the component and the horizontal.
I'm trying to create a dataset of raw volumetric data consisting of geometrical shapes. The point is to use volume ray casting to project them in 2D but first I want to create the volume manually.
The geometry is consisting of one cylinder that is in the middle of the volume, along the Z axis and 2 smaller cylinders that are around the first one, deriving from rotations around the axes.
Here is my function so far:
function cyl= createCylinders(a, b, c, rad1, h1, rad2, h2)
% a : data width
% b : data height
% c : data depth
% rad1: radius of the big center cylinder
% rad2: radius of the smaller cylinders
% h1: height of the big center cylinder
% h2: height of the smaller cylinders
[Y X Z] =meshgrid(1:a,1:b,1:c); %matlab saves in a different order so X must be Y
centerX = a/2;
centerY = b/2;
centerZ = c/2;
theta = 0; %around y
fi = pi/4; %around x
% First cylinder
cyl = zeros(a,b,c);
% create for infinite height
R = sqrt((X-centerX).^2 + (Y-centerY).^2);
startZ = ceil(c/2) - floor(h1/2);
endZ = startZ + h1 - 1;
% then trim it to height = h1
temp = zeros(a,b,h1);
temp( R(:,:,startZ:endZ)<rad1 ) = 255;
cyl(:,:,startZ:endZ) = temp;
% Second cylinder
cyl2 = zeros(a,b,c);
A = (X-centerX)*cos(theta) + (Y-centerY)*sin(theta)*sin(fi) + (Z-centerZ)*cos(fi)*sin(theta);
B = (Y-centerY)*cos(fi) - (Z-centerZ)*sin(fi);
% create again for infinite height
R2 = sqrt(A.^2+B.^2);
cyl2(R2<rad2) = 255;
%then use 2 planes to trim outside of the limits
N = [ cos(fi)*sin(theta) -sin(fi) cos(fi)*cos(theta) ];
P = (rad2).*N + [ centerX centerY centerZ];
T = (X-P(1))*N(1) + (Y-P(2))*N(2) + (Z-P(3))*N(3);
cyl2(T<0) = 0;
P = (rad2+h2).*N + [ centerX centerY centerZ];
T = (X-P(1))*N(1) + (Y-P(2))*N(2) + (Z-P(3))*N(3);
cyl2(T>0) = 0;
% Third cylinder
% ...
cyl = cyl + cyl2;
cyl = uint8(round(cyl));
% ...
The concept is that the first cylinder is created and then "cut" according to the z-axis value, to define its height. The other cylinder is created using the relation A^2 + B^2 = R^2, where A and B are rotated accordingly using the rotation matrices only around the x and y axes, using Ry(θ)Rx(φ) as described here.
Until now everything seems to be working, because I have implemented code (tested that it works well) to display the projection and the cylinders seem to have correct rotation when they are not "trimmed" from infinite height.
I calculate N, which is the vector [0 0 1] (the z-axis) rotated in the same way as the cylinder. Then I find two points P at the distances where I want the cylinder's edges to be, and calculate the plane equations T from those points and the normal vector. Lastly, I trim according to those inequalities. Or at least that's what I think I'm doing, because after the trimming I usually don't get anything (every value is zero). The best result I got while experimenting was trimmed cylinders, but the top and bottom planes were not oriented correctly.
I would appreciate any help or corrections at my code, because I've been looking at the geometry equations and I can't find where the mistake is.
Edit:
This is a quick screenshot of the object I'm trying to create. NOTE that the cylinders are opaque in the volume data, all the inside is considered as homogeneous material.
I think instead of:
T = (X-P(1))*N(1) + (Y-P(2))*N(2) + (Z-P(3))*N(3);
you should try the following at both places:
T = (X-P(1)) + (Y-P(2)) + (Z-P(3));
Multiplying by N is to account for the direction of the axis of the 2nd cylinder which you have already done just above that step.
I have been using the Hough transform in my application, both in MATLAB and OpenCV/LabVIEW, and found that for some images the Hough transform gave an obviously wrong line fit (consistently).
Here are the test and overlaid images. The angle seems right, but the rho is off.
On the image below, you will see the top image tries to fit a line to the left side of the original image and the bottom image fits a line to the right side of the image.
In Matlab, I call the Hough function through
[H1D,theta1D,rho1D] = hough(img_1D_dilate,'ThetaResolution',0.2);
In C++, I trimmed the OpenCV HoughLines function so that I end up with only the part that fills the accumulator. Note that because my theta resolution is 0.2, I have 900 angles to analyze. The tabSin and tabCos arrays are defined prior to the function so that they hold just the sin and cos of each angle.
Note that these routines generally work well, but just for specific cases it performs the way I have shown.
double start_angle = 60.0;
double end_angle = 120.0;
double num_theta = 180;
int start_ang = num_theta * start_angle / 180;
int end_ang = num_theta * end_angle / 180;
int i, j, n, index;

for (i = 0; i < numrows; i++)
{
    for (j = 0; j < numcols; j++)
    {
        if (img[i*numcols + j] == 100)
        {
            for (n = 0; n < 180; n++)
            {
                index = cvRound((j*tabCos[n] + i*tabSin[n])) + (numrho-1)/2;
                accum[(n+1) * (numrho+2) + index+1]++;
            }
        }
    }
}
tabCos and tabSin are defined in LabVIEW with this code:
int32 i;
float64 theta_prec;
float64 tabSin[180];
float64 tabCos[180];
theta_prec = 1/180*3.14159;
for (i = 0; i < 180; i++)
{
    tabSin[i] = sin(i*theta_prec);
    tabCos[i] = cos(i*theta_prec);
}
Any suggestions would be greatly appreciated.
I guess I'll put down the answer to this problem.
I was converting rho and theta into m and b, then computing the values of x and y from m and b. I believe this may have caused a precision error somewhere.
The error was fixed by obtaining x and y directly from rho and theta rather than going through m and b.
the function is
y = -cos(theta)/sin(theta)*x + rho/sin(theta);
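For reference, the usual way to draw the detected line is to compute two endpoints directly from rho and theta, which also avoids dividing by sin(theta) for near-vertical lines. A short sketch (not the original code):

import numpy as np

def line_endpoints(rho, theta, half_len=1000):
    """ two endpoints of the Hough line, taken from the point on the line
        closest to the origin and walking along the line's direction """
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho
    p1 = (int(round(x0 - half_len * b)), int(round(y0 + half_len * a)))
    p2 = (int(round(x0 + half_len * b)), int(round(y0 - half_len * a)))
    return p1, p2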