Plane extraction in 3D data - matlab

What is the best way to:
1. Detect a plane in a set of 3D points with high noise around part of it (not all of the points)?
2. Extract the plane equations of two intersecting planes in 3D data?
Thanks in advance

Use the pcfitplane function in the Computer Vision System Toolbox.
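For the two-plane case, a minimal sketch (assuming a pointCloud object ptCloud and an inlier threshold maxDistance tuned to your noise level) is to run pcfitplane twice, removing the first plane's inliers before the second fit:

    % Sequential RANSAC plane fits on a pointCloud object ptCloud
    maxDistance = 0.02;  % inlier threshold -- assumed value, tune to your noise
    [plane1, ~, outlierIdx1] = pcfitplane(ptCloud, maxDistance);

    % Remove the first plane's inliers, then fit the second plane
    remaining = select(ptCloud, outlierIdx1);
    plane2 = pcfitplane(remaining, maxDistance);

    % Plane equations a*x + b*y + c*z + d = 0 (1x4 parameter vectors)
    disp(plane1.Parameters)
    disp(plane2.Parameters)

    % Direction of the intersection line of the two planes
    lineDir = cross(plane1.Normal, plane2.Normal);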

Related

Matching RGB image with point cloud

I have an RGB image and a point cloud acquired by LIDAR.
In the RGB image I detect a feature, let's say a circle.
I want to use this circle as an ROI in my 3D point cloud.
How can I do that? I was thinking of producing a 3D point cloud from the RGB image through the camera parameters and then matching the two with the ICP algorithm.
The problem is that the moment I produce the point cloud from the 2D image, my coordinate system changes, so I no longer know the position of my circle.
To perform the 3D reconstruction I use the triangulateMultiview function.
I was thinking of producing a 3D point cloud from the RGB image through the camera parameters and then matching the two with the ICP algorithm.
-> This would not work, and it would not be efficient.
Actually, there is a much better way. Assuming that you know the extrinsics between the camera and the lidar, any circle (or ellipse) on the image can be extended into a 3D cone using the camera intrinsics, and by selecting the points within the cone you can perform the ROI operation.
Let's say you can define an ellipse on your image plane by detecting it and finding the parameters of its ellipse equation. The ellipse equation can be extended into a quadric (cone) equation representing the 3D cone. The only thing left is to test whether each 3D point lies within the cone by plugging it into the cone equation.
Mathematically this is a somewhat involved problem if you are not comfortable with the camera model or quadric equations.
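A minimal sketch of that test, where all names are illustrative assumptions: K is the 3x3 camera intrinsic matrix, T = [R t] is the 3x4 lidar-to-camera extrinsic, C is the 3x3 ellipse conic matrix (normalized so that [u v 1]*C*[u;v;1] < 0 inside the ellipse), and pts is an N-by-3 matrix of lidar points:

    P = K * T;                          % 3x4 projection matrix
    Q = P' * C * P;                     % 4x4 quadric of the backprojected cone

    Xh   = [pts, ones(size(pts,1), 1)]; % homogeneous coordinates, N-by-4
    vals = sum((Xh * Q) .* Xh, 2);      % x'*Q*x for every point at once

    camZ   = Xh * T(3,:)';              % depth of each point in the camera frame
    roiPts = pts(vals < 0 & camZ > 0, :);  % inside cone AND in front of camera

The depth check is needed because the backprojected quadric is a double cone extending both in front of and behind the camera center.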

generating irregular mesh of a cube

I want to generate a mesh for solving the Laplace equation using FEM. I want to solve it on a cube, and I would like to mesh the cube using tetrahedra. Is there a way to do this in MATLAB using unstructured meshes, i.e., where I can choose whether the density of elements should be higher in a specific region?
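One possible approach is the PDE Toolbox, sketched below; the 'Hface' option for local refinement is only available in relatively recent releases, and the face ID used here is an assumption (inspect the real IDs with pdegplot(model, 'FaceLabels', 'on')):

    % Tetrahedral mesh of a unit cube with local refinement on one face
    model = createpde();
    model.Geometry = multicuboid(1, 1, 1);   % unit cube geometry

    % 'Hmax' sets the global target element size; 'Hface' requests a
    % smaller target size near face 1 (assumed ID, for illustration)
    generateMesh(model, 'Hmax', 0.2, 'Hface', {1, 0.05});

    pdemesh(model)                           % visualize the tetrahedra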

How to convert 3D point cloud (extracted from 3D sparse reconstruction) to millimeters?

Using stereo vision, and based on the Multiple View Geometry book (http://www.robots.ox.ac.uk/~vgg/hzbook/), I have created a 3D point cloud in MATLAB. To do that, I first calibrated the cameras and rectified the stereo images, then performed feature extraction and matching, then eliminated the noisy matches based on the camera locations, and finally created the 3D point cloud using triangulation.
Now my question is: how do I convert this 3D point cloud from the pixel domain to the actual millimeter/centimeter domain, knowing my focal length and camera calibration matrices?
The goal is to find DEPTH IN MILLIMETERS.
I know how to do it in the disparity/depth-map case using the formula Z = (t*f)/d.
But here in the sparse case, can I do something like this? http://matlab.wikia.com/wiki/FAQ#How_do_I_measure_a_distance_or_area_in_real_world_units_instead_of_in_pixels.3F
Or is there a more sophisticated method with a more in-depth explanation?
Thanks.
The formula you wrote is valid only in the special case where the image planes of the two cameras lie on the same geometrical plane, and the motion from one to the other is a translation parallel to one of the image axes.
In the general case you'll need to triangulate actual rays in 3D space, using one of the techniques described in that book (it has a whole chapter on reconstruction). The reconstruction will be metric if your calibration is; in particular, if the coordinate transform between the cameras has a translation vector whose units are meters (or millimeters, or inches, ...).
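As a concrete illustration in MATLAB: if the stereo rig was calibrated with a checkerboard whose square size was specified in millimeters, the Computer Vision System Toolbox's triangulate function returns points directly in millimeters (matchedPoints1, matchedPoints2, and stereoParams are assumed to already exist from calibration and matching):

    % Metric triangulation with a calibrated stereo pair; the world units
    % are whatever units the checkerboard square size was given in
    worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);
    depthMM = worldPoints(:, 3);   % Z coordinate = depth, here in millimeters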

Matlab: From Disparity Map to 3D coordinates

I copied the matlab code from: http://www.mathworks.fr/fr/help/vision/ug/stereo-image-rectification.html
I can compute the 3D coordinates, but I am not sure whether they are correct.
Starting from the disparity map and calculating the 3D coordinates, how do we take into account the warping transforms tform1 and tform2?
The problem here is that you are using uncalibrated cameras. In this case you can get an up-to-scale reconstruction, but if you want the 3D points in world units, you need to know the actual distances to some points in the world.
I think you would be better off calibrating your stereo system. Please see this example.
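For reference, a sketch of the calibrated pipeline (stereoParams from the Stereo Camera Calibrator and input images I1, I2 are assumed; exact function names and signatures vary by release, e.g. disparitySGM superseded the older disparity function):

    % Rectify, compute disparity, and reproject to 3D in world units
    [J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
    disparityMap = disparitySGM(rgb2gray(J1), rgb2gray(J2));
    points3D = reconstructScene(disparityMap, stereoParams);  % world units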

Multiview 3D reconstruction

I tried to do 3D reconstruction from multiple views by using multiview essential matrices to construct a 3D view from each image view of the object. However, I am shocked that the 3D points I found all lie approximately on the XY plane. I suspect it may be due to the large values in the essential matrix or an incorrectly estimated projection matrix. What are your suggestions for computing precise 3D point coordinates?
If you have the Computer Vision System Toolbox, this example may be helpful.
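The core two-view steps behind that example look roughly like the sketch below; pts1 and pts2 are matched point coordinates (N-by-2) and cameraParams comes from calibration, all assumed to exist:

    % Calibrated two-view reconstruction sketch
    [E, inlierIdx] = estimateEssentialMatrix(pts1, pts2, cameraParams);
    [relOrient, relLoc] = relativeCameraPose(E, cameraParams, ...
        pts1(inlierIdx, :), pts2(inlierIdx, :));

    % Camera matrices: first camera at the origin, second from the relative pose
    camMatrix1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
    [R, t] = cameraPoseToExtrinsics(relOrient, relLoc);
    camMatrix2 = cameraMatrix(cameraParams, R, t);

    points3D = triangulate(pts1(inlierIdx, :), pts2(inlierIdx, :), ...
        camMatrix1, camMatrix2);

If the points still collapse toward a plane, check that the matched points are spread across the image and that the baseline between the views is not too small.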