Matching RGB image with point cloud - matlab

I have an RGB image and a point cloud acquired by LIDAR.
In the RGB image I detect a feature, let's say a circle.
I want to use this circle as a ROI in my 3d point cloud.
How can I do that? I was thinking of producing a 3D point cloud from the RGB image using the camera parameters and then matching the two with the ICP algorithm.
The problem is that the moment I produce the point cloud from the 2D image, my coordinate system changes, so I no longer know the position of my circle.
To perform the 3D reconstruction I use the triangulateMultiview function.

I was thinking of producing a 3D point cloud from the RGB image using the camera parameters and then matching the two with the ICP algorithm.
-> This would not work well and would not be efficient.
Actually, there is a much better way. Assuming that you know the extrinsics between the camera and the lidar, any circle (or ellipse) in the image can be extended into a 3D cone using the camera intrinsics, and by selecting the points within that cone you can perform the ROI operation.
Let's say you can define an ellipse on your image plane by detecting it and finding the parameters of its ellipse equation. The ellipse equation can be extended into a quadric (cone) equation representing the 3D cone. The only thing left is testing whether a 3D point is inside the cone by plugging it into the cone equation.
This is a mathematically somewhat involved problem if you are not comfortable with the camera model or quadric equations.
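As a rough illustration, here is a minimal MATLAB sketch of that test. It assumes the detected ellipse is available as a 3x3 conic matrix C in pixel coordinates (normalized so that pixels inside the ellipse give a negative value), that K is the camera intrinsic matrix, and that R and t are the lidar-to-camera extrinsics; all of these variable names are assumptions, not code from the question.

% C         : 3x3 symmetric conic matrix of the detected ellipse (pixel coordinates)
% K         : 3x3 camera intrinsic matrix
% R, t      : extrinsics mapping lidar points into the camera frame (X_cam = R*X_lidar + t)
% pts_lidar : N-by-3 lidar points
Q = K' * C * K;                            % back-projected cone: X_cam' * Q * X_cam = 0 on the cone surface
pts_cam = (R * pts_lidar' + t)';           % lidar points expressed in the camera frame
vals = sum((pts_cam * Q) .* pts_cam, 2);   % evaluates X' * Q * X for every point
inside = vals < 0 & pts_cam(:,3) > 0;      % inside the cone and in front of the camera
roi_points = pts_lidar(inside, :);         % the ROI in the original lidar frame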

Related

Is it possible to find the depth of an internal point of an object using stereo images (or any other method)?

I have an image of a robot with yellow markers, as shown.
The yellow points shown are the markers. Two cameras, placed at an offset of 90 degrees, are used to view the robot, which bends in between them. A crude schematic of the setup can be seen here:
https://i.stack.imgur.com/aVyDq.png
Using the two cameras I am able to get the 3D coordinates of the yellow markers. But I need to find the 3D coordinates of the central point of the robot, as shown.
I need to find the 3D position of the red marker points, which are inside the cylindrical robot. Firstly, is it even feasible? If yes, what method can I use to achieve this?
As a bonus, is there any literature on finding the 3D location of such internal points that I can refer to? (I searched, but could not find anything similar to my question.)
A theoretical solution is welcome as well (as long as it can find the central point within a reasonable error), which I can later translate into code.
If you know the actual dimensions, or at least the shape (e.g. a perfect circle), of the white bands, then yes, it is feasible.
You need to do the following steps, which are quite non-trivial, and I won't do them here:
Optional but strongly suggested: calibrate your camera and undistort the images.
1. Find the equation of the projection of a 3D circle onto a 2D camera, for any given rotation. You can simplify this by assuming the white line is completely horizontal. You want some function that takes the parameters defining a circle and a rotation.
2. Find all white bands in the image, segment them, and make them horizontal (rotate them).
3. Fit the points of the corrected white circle to the equation from step 1. That should give you the parameters of the circle in 3D (radius, angle), if you wrote the equation right.
4. Now that you have an analytic equation of the actual circle (the equation from step 1 with the parameters from step 3), you can map any point of this circle (e.g. its center) to its image location. Remember to undo the rotations from step 2.
This requires an understanding of curve fitting, some analytical geometry, and decent coding skills. Not trivial, but it will provide a highly accurate solution.
For an inaccurate solution (sketched in code after this list):
Find the end points of the white circles.
Draw a line connecting the endpoints.
Choose the center as the midpoint of this line.
This will be inaccurate because choosing end points has more error than fitting an equation to all points, and it ignores the cone-shaped view of the camera and the geometry.
But it may be good enough for what you want.
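A minimal MATLAB sketch of that endpoint approach, assuming band_mask is a binary mask of a single, already segmented white band (the variable names are hypothetical):

% band_mask : binary mask of one segmented white band
[r, c] = find(band_mask);               % pixel coordinates of the band
[~, iLeft]  = min(c);                   % left-most pixel
[~, iRight] = max(c);                   % right-most pixel
p1 = [c(iLeft),  r(iLeft)];
p2 = [c(iRight), r(iRight)];
center2d = (p1 + p2) / 2;               % midpoint of the connecting line = crude center estimate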
I have been able to extract the midpoint by fitting an ellipse to the arc visible to the camera. The centroid of the ellipse is the required midpoint.
There will be wrong ellipses as well, which can be ignored. The steps to extract the ellipse were:
Extract the markers
Binarise and skeletonise
Fit ellipse to the arc (found a matlab function for this)
Get the centroid of the ellipse
% extract the markers: threshold the value channel of the HSV image
hsv_img = rgb2hsv(im);
bin = hsv_img(:,:,3) > marker_th;   % marker_th was chosen as 0.35
% skeletonise
skel = bwskel(bin);
% use regionprops to get the pixel list of each connected component
stats = regionprops(skel, 'all');
% fit an ellipse to each arc and draw it
for i = 1:numel(stats)
    el = fit_ellipse(stats(i).PixelList(:,1), stats(i).PixelList(:,2));
    ellipse_draw(el.a, el.b, -el.phi, el.X0_in, el.Y0_in, 'g');
end
Link to the fit_ellipse function
Link to the ellipse_draw function

How to transform a non-planar surface on a plane using a pair of 2D and 3D control points?

I have a set of control point pairs. One part of each pair is in world coordinates (3D). The other is in pixel coordinates of the image (2D).
My goal is to transform the surface you can see in this image onto a flat plane. The problem is that the surface is not perfectly flat; it looks somewhat like a ribbon. Otherwise I could have used OpenCV's getPerspectiveTransform() or Matlab's fitgeotrans().
I know that I can use OpenCV's solvePnP() or Matlab's estimateWorldCameraPose() to get the pose of the camera. The camera matrix is known and the image is rectified. But what is the next step then? How can I transform my ribbon-shaped surface onto a flat plane, i.e. get an orthographic top view? That is the step I'm stuck on.
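For reference, the pose step mentioned above looks roughly like this in MATLAB; imagePoints, worldPoints, and cameraParams are assumed names for the question's control points and calibration data:

% imagePoints  : M-by-2 pixel coordinates of the control points
% worldPoints  : M-by-3 corresponding world coordinates
% cameraParams : cameraParameters object from calibration
[worldOrientation, worldLocation] = estimateWorldCameraPose(imagePoints, worldPoints, cameraParams);
% The open question is what to do after this step to flatten the ribbon-like surface.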

Volume reconstruction from 3D image gradient in Matlab

For 2D images, Gx and Gy give the vertical and horizontal edge information, respectively. The angle of the gradient direction vector can be calculated as the arctangent of Gy/Gx, and the edge direction is perpendicular to the gradient direction vector.
I have a 3D z-stacked image dataset, so each pixel is represented by an (x, y, z) coordinate together with its intensity value. I have used the 3D image gradient link for initial reference.
(1) What if I want to derive the whole volume/wireframe model using only the 3D image gradient magnitude and direction?
(2) Do I need x-, y-, and z-stacked image data separately in order to generate the whole volume/wireframe model?
In Matlab, I can also build the whole volume just from 2D z-stacked masks using the isosurface command. Here I am exploring other possibilities to generate a wireframe volume/3D model.
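A minimal sketch of how the two ideas could be combined, assuming V is a 3D array assembled from the z-stack (the variable names and the threshold value are assumptions):

% V : 3D volume assembled from the z-stacked images, e.g. V(:,:,k) = im2double(stack{k})
[Gx, Gy, Gz] = imgradientxyz(V);                  % 3D gradient components
Gmag = sqrt(Gx.^2 + Gy.^2 + Gz.^2);               % gradient magnitude, large at surfaces/edges
thresh = 0.5 * max(Gmag(:));                      % assumed threshold, tune for your data
fv = isosurface(Gmag, thresh);                    % faces/vertices of the surface mesh
patch(fv, 'FaceColor', 'none', 'EdgeColor', 'b'); % draw as a wireframe
axis equal; view(3);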
Thanking you in anticipation.

How to convert 3D point cloud (extracted from 3D sparse reconstruction) to millimeters?

Using stereo vision, and based on the Multiple View Geometry book (http://www.robots.ox.ac.uk/~vgg/hzbook/), I have created a 3D point cloud in MATLAB. To do that, I first calibrated the cameras and rectified the stereo images, then did feature extraction and matching, then eliminated the noisy matches based on the camera locations, and finally created the 3D point cloud using triangulation.
Now my question is: how do I convert this 3D point cloud from the pixel domain to the actual millimeter/centimeter domain, knowing my focal length and camera calibration matrices?
The goal is to find depth in millimeters.
I know how to do it in the disparity/depth-map case using the formula Z = (t*f)/d.
But here, in the sparse case, can I do something like this? http://matlab.wikia.com/wiki/FAQ#How_do_I_measure_a_distance_or_area_in_real_world_units_instead_of_in_pixels.3F
Or is there a more sophisticated method with a more in-depth explanation?
Thanks.
The formula you wrote is valid only in the special case when the image planes of the two cameras are on the same geometrical plane, and the motion from one to the other is a translation parallel to one of the image axes.
In the general case you'll need to triangulate actual rays in 3D space, using one of the techniques described in that book (it has a whole chapter on reconstruction). The reconstruction will be metric if your calibration is, in particular if the coordinate transform between the cameras has a translation vector whose units are meters (or millimeters, or inches, ...).
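If the pipeline uses MATLAB's Computer Vision Toolbox, a minimal sketch of metric triangulation would look like this; matchedPoints1, matchedPoints2, and stereoParams are assumed to come from the existing calibration and matching steps:

% If the checkerboard square size was given in millimeters during calibration,
% the translation in stereoParams is in millimeters, and so are the output points.
worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);   % N-by-3, in calibration units
depths_mm = worldPoints(:, 3);   % Z coordinate = depth from camera 1, in millimeters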

Binary mask in image space from point cloud data

Hi!
I have a point cloud representing walls and the floor of an indoor scene.
I projected the points of the floor onto a plane. That means it's a "2D point cloud" now.
All z-coordinates are zero.
I have to deal with missing parts. My idea now is to fill the holes.
Is there a way to transform the points into image space to create a binary mask? I would like to use techniques like imfill in Matlab...
Thanks
Edit:
To make it clearer, I will explain a simple example. I have points in 2D. After I compute a triangulation, I can access each triangle. For each triangle I create a binary mask with poly2mask(), and I write each mask into a final image.
Here is an example:
Now I can use morphological operations on the image.
E.g., here is a more complex example, where the triangulation gives me bad results:
To fill the hole on the right side, I could use a morphological operation.
My problem: the points can be negative, and the distances between the triangle vertices can be very small (e.g., the x coordinates of a triangle (1.12, 1.14, 1.12) will all map to pixel 1 in image space).
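One common way around this is to scale and shift the points into pixel coordinates before rasterizing. A minimal sketch, assuming pts is the N-by-2 point set and res is a chosen resolution in world units per pixel (both names and the value are assumptions):

res = 0.005;                                        % assumed: world units per pixel
px = round((pts(:,1) - min(pts(:,1))) / res) + 1;   % shift to positive, then scale
py = round((pts(:,2) - min(pts(:,2))) / res) + 1;
mask = false(max(py), max(px));
tri = delaunay(px, py);                             % triangulate in pixel coordinates
for k = 1:size(tri, 1)                              % rasterize each triangle with poly2mask
    mask = mask | poly2mask(px(tri(k,:)), py(tri(k,:)), size(mask,1), size(mask,2));
end
mask = imfill(mask, 'holes');                       % now imfill etc. work in image space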